apockill / uarmcreatorstudio

uArm Creator Studio is a visual programming language for robot arms, with a heavy emphasis on computer vision and on usability for both novice and experienced programmers. It's written entirely in Python and supports Python scripting within the application.


uarmcreatorstudio's Introduction

uArm Creator Studio

uArm Creator Studio is a visual programming language heavily inspired by YoYo Game Maker, but aimed at programming robot arms. This software puts a heavy emphasis on making computer vision accessible to users, and on making it dead simple to program your robot arm to do complex tasks. YoYo Game Maker inspired me as a child because it taught me the basics of programming when I was ten, and let me learn more and more as I grew more experienced. It scales from beginner to advanced use very well, which is why I kept it in mind so much while making this.

I originally started this project because I didn't like how much work it took to get my robot arm to do simple tasks. I wanted a quick and easy way to move the robot to different waypoints, and maybe pick up objects and drop them. The project scope quickly evolved, and I decided that I wanted it to focus heavily on computer vision. I figured that most folks don't have the time to build their own libraries for integrating computer vision with robot arms, and that they might want to just use vision to accomplish simple tasks. However, in order to make this worthwhile, I wanted to make it incredibly easy to use, even for non-programmers, while not alienating experienced programmers.

Furthermore, I didn't want to spend a huge amount of time writing Visual Programming Language (VPL) focused code without also making an equally useful API. Thus, I split the project into two sections: GUI code and Logic code. Because of this, everything you can do with the click-and-drag interface can also be done entirely in Python script, and a script built in the GUI can be run without any GUI at all.

Getting Started

Since you're looking at the GitHub page, I'll assume you don't want to download an .exe and run the program from there, and that you instead want to run it from source. Here's how this goes down!

Prerequisites

This package uses several libraries. It should be entirely cross-platform, but I appreciate any feedback that says otherwise.

  • Python 3.4.4

    • Download

  • PyQt5

    • Installation Tutorial

  • OpenCV 3.1

    • How to for Windows

    • How to for Linux

  • pyserial

    • Use pip install pyserial in the command line

  • numpy

    • Use pip install numpy in the command line

  • PyInstaller (optional)

    • I use PyInstaller to package everything into an EXE. Use my Build.spec file to build everything without hassle, and to include icons.

Installing

Clone the repository, extract it, and keep the directory structure the same. If you have all of the dependencies and are ready to roll, open MainGUI.py and run it. Assuming everything works, a window will pop up and you're in business! If not, email me at [email protected] and we can hash out the issue. I'm interested in finding out what kinds of problems people run into, so I can make the build process easier.

If you have a uArm: Make sure you have the right communication protocol uploaded onto your uArm's Arduino board, or else this won't work at all. This GUI uses a custom communication protocol (although that might change soon, since uFactory is adopting my communication protocol). To make sure, go to Robot Firmware and import the appropriate libraries from the Libraries To Import folder, then upload the .ino file in the CommunicationProtocol folder to your uArm.

Project Structure

The project is separated into "Logic" and "GUI" elements. This was done to force myself to write completely GUI-independent logic code; as a result, anything you can do in the GUI can also be done by scripting directly with the Logic code. It's a pain, but it's possible!

  • Logic Overview
    • Commands.py and Events.py
      • This is where all of the logic for each command and event is defined. If you make a custom Command, you must have a CommandsGUI.py implementation and a Commands.py implementation with the same name; that's how the Interpreter instantiates the object from a string (a rough sketch of this lookup pattern appears after this project-structure list). The same applies, in reverse, for creating custom Events.
    • Environment.py
      • This is a singleton object that holds the Robot, VideoStream, Settings, and ObjectManager classes.
      • This was done since commands and events need various things during instantiation, and the Environment is a great way to pass them around. Furthermore, it simplified the separation of Logic and GUI tremendously.
    • Interpreter.py
      • This is, well, the interpreter of the project. When you press "play" in the GUI, all of the code gets saved as JSON (the exact same format the project uses for saving), then passed to the Interpreter, which instantiates all of the events from Events.py and the commands from Commands.py.
      • The Interpreter can be run threaded or not threaded. It's designed for both.
      • The Interpreter can run interpreters within it. This is how the "Run Task" and "Run Function" commands work: by generating an interpreter with a separate script and running it.
      • Interpreters can run recursively as well, and they catch recursion-limit exceptions and call for the script to end.
      • The Interpreter also handles the namespace for variables that are created and used during the script, and it has a function to reset the namespace (a minimal exec/eval sketch also appears after this project-structure list).
      • Since the exec and eval functions are used in the Interpreter, it is incredibly unsafe to run anyone else's .task files without checking the commands to make sure they are safe. Just like running code from someone else, make sure to check it first! I am not responsible for what other people do with this software.
    • Vision.py
      • This handles all vision requests throughout the GUI.
      • All tracking works like this: you "add" a target to track, and Vision passes the work off to a VideoStream thread to look for objects. Then you query Vision to ask whether the object has been seen recently, and it will look through a history of "tracked" objects and tell you the latest time the object was seen, its position, orientation, and accuracy. More info in the module.
      • It holds the definitions of PlaneTracker and CascadeTracker, which are the trackers I use for different tracking tasks. Almost all tracking is done with PlaneTracker, but face/eye/smile tracking uses CascadeTracker. These trackers should not be called directly; always use the functions inside of Vision.
    • Video.py
      • This holds VideoStream, my threaded video-capture class, which can also do computer vision work when "work" or "filter" functions are passed to it. No Vision code actually lives in here.
    • Robot.py
      • This is a wrapper around CommunicationProtocol.py which caches position and makes moving easy.
      • Since connecting to serial can take a while, it has a threaded connection function, which should last 1-5 seconds and then end the thread. Thus, all functions are designed to be thread-safe.
    • CommunicationProtocol.py
      • This is what you change if you want to make a custom robot arm compatible with this software.
      • It's also thread-safe. I still don't recommend abusing that, though, since I can't imagine a use case for sending commands from two separate threads.
    • RobotVision.py
      • This is a module with functions that use both the Robot and Vision. It's a convenient way to reuse complex vision/robot functions instead of having repetitive code in Commands.py.
    • ObjectManager.py and Resources.py
      • ObjectManager is what handles the saving and loading of things like Motion recordings, Vision objects, Functions, or whatever else might be added in the future.
      • Resources.py is where the Trackable, MotionPath, and Function objects are defined. All new resources should be defined in Resources.py, because that's where ObjectManager searches when instantiating objects. It parses the filename: the first word is the "type". It then checks Resources.py to see whether that type exists, and if it does, it creates that object and gives it the directory to load its information from.
    • Global.py
      • Holds a custom print function, which can redirect prints to the GUI's console when the GUI is being used.
  • GUI Overview
    • MainGUI.py
      • Handles the main window and the settings page, and is the center for all things GUI.
    • ControlPanelGUI.py
      • This contains the EventList, CommandList, and ControlPanel widgets, which are essential. EventList is the list that holds the events, to the left of the CommandList. Each "Event" item holds its own individual CommandList reference. The ControlPanel handles which CommandList is currently in view.
    • CommandsGUI.py
      • Stores all of the windows for the commands, and the click-and-drag aspect of things. If you want to add a new command, go here first.
    • EventsGUI.py
      • Stores all of the Events that can be placed in the program. If you want to add a new event, go here first.
    • CalibrationsGUI.py
      • This holds the window and logic for the calibrations the user can do with the robot. If you want to run without a GUI, use the GUI to do the calibrations, which get saved automatically in Resources/Settings.txt, then run your script using the saved calibration.
    • ObjectManagerGUI.py
      • This handles the "Resources" menu on the toolbar, and works with ObjectManager.py to save new objects.
    • CommonGUI and CameraGUI
      • These are convenient widgets I use throughout the project.
    • Paths.py
      • What you would expect: holds paths for icons and other GUI elements.
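
A quick aside on the Commands.py / CommandsGUI.py pairing mentioned above: the underlying idea is simply "instantiate a class from its string name". The sketch below is purely illustrative (the command class and its parameters are made up, and this is not the actual Interpreter code), but it shows the general pattern:

import sys

class MoveXYZCommand:                                    # hypothetical example command
    def __init__(self, parameters):
        self.parameters = parameters

    def run(self):
        print("Moving to", self.parameters)

def create_command(type_name, parameters):
    # Look a command class up by its string name and instantiate it
    this_module = sys.modules[__name__]                  # stand-in for the Commands module
    command_class = getattr(this_module, type_name, None)
    if command_class is None:
        raise ValueError("Unknown command type: " + type_name)
    return command_class(parameters)

command = create_command("MoveXYZCommand", {"x": 0, "y": 15, "z": 11})
command.run()

Because the lookup key is just the class name, the logic-side class in Commands.py and the GUI-side class in CommandsGUI.py have to share that name.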
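
Likewise, the namespace handling and exec/eval usage mentioned under Interpreter.py boil down to running user code against a dictionary that can be reset between runs. Again, this is a generic sketch of the pattern, not the actual Interpreter code:

class ScriptNamespace:
    # Holds the variables created by user script snippets
    def __init__(self):
        self.variables = {}

    def reset(self):
        # Forget every variable created by previously executed snippets
        self.variables.clear()

    def execute(self, code):
        # Run a snippet of user code; variables persist between calls
        exec(code, self.variables)

    def evaluate(self, expression):
        # Evaluate an expression in the same namespace and return its value
        return eval(expression, self.variables)

ns = ScriptNamespace()
ns.execute("count = 1 + 2")
print(ns.evaluate("count * 10"))   # prints 30
ns.reset()                         # the namespace is empty again

Since exec and eval will happily run anything, this is also exactly why untrusted .task files should be read before they are run.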

Authors

Alex Thiel

I'm a student pursuing a bachelor's degree in robotics at ASU. I'm working as a software engineer at uFactory, developing uArm Creator Studio.

Github

Youtube

Contact me at [email protected]

Contributing

王诗阳 Shiyang Wang - Icon Design

Tyler Compton - Created the UCS icon, provided valuable advice for certain language design questions, and helped in many other ways.

License

This project is called uArmCreatorStudio. uArmCreatorStudio is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. uArmCreatorStudio is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with uArmCreatorStudio. If not, see http://www.gnu.org/licenses/.

Acknowledgments

Thank you to everyone at uFactory for giving me free rein over this project during my internship, and for allowing me the space to be creative and develop new ideas without fear of the consequences of failure.

Special thanks to 周亚琴 Poppy Zhou and 罗俊茂 Lorder Luo for helping me at every step of the way with marketing, promotion, bug testing, and much much more.


uarmcreatorstudio's Issues

Arm not going low enough

Hi,

When an object is recognised and the software tries to automatically pick it up, it often stops about 1 cm above it (even though I did the uArm calibration correctly).

I suggest lowering the ground value, or getting touch feedback via the tip sensor.

kinematic motion algorithm

Hi, sorry to open a ticket for my question, but I'm a little lost trying to find the methods that convert an input location (Cartesian or polar) into optimized kinematic motions for each axis. For example, how is the best path from coordinates A to B defined, and how is it converted into the kinematic motion of each axis? What is the optimization function and type? How are the kinematic motion parameters of a specific arm defined? I'd really appreciate a hint.

Adding test framework and continuous integration

As the project grows, it becomes harder to maintain quality control, i.e. to ensure that any incoming change does not break existing behavior. For example, we could use a system like Travis CI to trigger a build/test on every PR commit.

Challenges:

  • Building opencv library in CI
  • GUI testing for PyQt

I can probably take the initial hammering to get things going on the core library side, whereas the areas involving the challenges above may require more thought.

calibration range

Currently the ranges are hardcoded:

zTest = int(round(zLower, 0))  # Since range requires an integer, round zLower just for this case
for x in range(  -20, 20, 4): testCoords += [[x,  15,    11]]  # Center of XYZ grid
for y in range(    8, 24, 4): testCoords += [[ 0,  y,    11]]
for z in range(zTest, 19, 1): testCoords += [[ 0, 15,     z]]

# for x in range(  -20, 20, 1): testCoords += [[x,  15, zTest]]  # Center of XY, Bottom z
# for y in range(    8, 25, 1): testCoords += [[ 0,  y, zTest]]
# for z in range(zTest, 25):    testCoords += [[ 0, 15,     z]]

for x in range(  -20, 20, 4): testCoords += [[x,  15,    17]]  # Center of XY, top z
for y in range(   12, 24, 4): testCoords += [[ 0,  y,    17]]

direction = int(1)
for y in range(12, 25, 2):
    for x in range(-20 * direction, 20 * direction, 2 * direction):
        testCoords += [[x, y, zTest]]
    direction *= -1

In practice, however, the real range also depends on how the camera is mounted or zoomed in. For example, the effective range my camera can see on the x axis is (-10, 10), whereas it's hardcoded as (-20, 20). Hence I would like to propose collecting (x, y, z) at the 4 corners (mostly x and y) in order to determine the range and how fine-grained the calibration should be.

Here's an example partial diff; the actual values should come from user assistance. Also, append is used instead of += for performance reasons, since += makes a new array on every use.

(venv)[tw-mbp-yic uArmCreatorStudio (master)]$ git diff
diff --git a/CalibrationsGUI.py b/CalibrationsGUI.py
index d50a6d3..97e3328 100644
--- a/CalibrationsGUI.py
+++ b/CalibrationsGUI.py
@@ -798,24 +798,22 @@ class CWPage5(QtWidgets.QWizardPage):
 
         # Test the z on 3 xy points
         zTest = int(round(zLower, 0))  # Since range requires an integer, round zLower just for this case
-        for x in range(  -20, 20, 4): testCoords += [[x,  15,    11]]  # Center of XYZ grid
-        for y in range(    8, 24, 4): testCoords += [[ 0,  y,    11]]
-        for z in range(zTest, 19, 1): testCoords += [[ 0, 15,     z]]
+        for x in range(  -10, 10, 1): testCoords.append([x,  15,    11])  # Center of XYZ grid
+        for y in range(    8, 24, 4): testCoords.append([ 0,  y,    11])
+        for z in range(zTest, 19, 1): testCoords.append([ 0, 15,     z])
 
         # for x in range(  -20, 20, 1): testCoords += [[x,  15, zTest]]  # Center of XY, Bottom z
         # for y in range(    8, 25, 1): testCoords += [[ 0,  y, zTest]]
         # for z in range(zTest, 25): testCoords += [[ 0, 15,     z]]
 
-        for x in range(  -20, 20, 4): testCoords += [[x,  15,    17]]  # Center of XY, top z
-        for y in range(   12, 24, 4): testCoords += [[ 0,  y,    17]]
+        for x in range(  -10, 10, 1): testCoords.append([x,  15,    17])  # Center of XY, top z
+        for y in range(   12, 24, 4): testCoords.append([ 0,  y,    17])
 
-
-
-        direction  = int(1)
+        direction = int(1)
         for y in range(12, 25, 2):
-            for x in range(-20 * direction, 20 * direction, 2 * direction):
-                testCoords += [[x, y, zTest]]
-            direction *= -1
+          for x in range(-10 * direction, 10 * direction, 2 * direction):
+            testCoords.append([x, y, zTest])
+          direction *= -1

Exception Robot Not Responding while connecting to port /dev/cu.usbserial-AI04I0QM

(venv)[tw-mbp-yic uArmCreatorStudio (master)]$ PYTHONPATH=/opt/twitter/Cellar/opencv3/3.1.0_4/lib/python3.5/site-packages:$PYTHONPATH python3 MainGUI.py 
Environment    Loading Settings
Resources/Objects/
Video          Starting videoStream thread.
Video          Setting camera to cameraID 0
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: cHRM chunk does not match sRGB
GUI            No events selected
GUI            No events selected
GUI            No event selected. Hiding buttons.
Video          SUCCESS: Camera is connected to camera 0
GUI            Opening Devices Window
GUI            Apply clicked, applying settings...
Environment    Saving setting: robotID
Video          Tried to create mainThread, but mainThread already existed.
Robot          Setting uArm to /dev/cu.usbserial-AI04I0QM
Robot          Thread Created
Video          Setting camera to cameraID 0
Cleaned up camera.
Video          SUCCESS: Camera is connected to camera 0
Communication  ERROR: Exception Robot Not Responding while connecting to port /dev/cu.usbserial-AI04I0QM
Robot          FAILURE: uArm was unable to connect!
Environment    Saving setting: robotID

The firmware is 1.7.4. It let me select the USB port, and the uArm started moving a bit, but after a few seconds the exception popped up. I'm not sure how to proceed from here.

Option to disable camera

Currently, once a camera is selected there doesn't seem to be a way to turn it off, and it burns a decent portion of OS resources. If you're okay with the idea, I can send out a PR.

Thanks,
Yi

Any advice for a webcam?

I have the choice between a cheap camera (720p, 3 Mpx) or a more expensive one (which outputs 1920x1080).
Is it worth getting a high-quality camera for this software?

Thanks for your work :)

Exception when creating vision object on Linux

An exception occurs after selecting an area when creating a new vision object. The exception is "The data should normally be NULL!".

I fixed it by compiling OpenCV directly from the GitHub source. The newest release of OpenCV 3 does not have the fix yet.

Increase camera resolution for better recognition

Hi,

I have a pretty decent camera (the latest Logitech C922, which is capable of outputting full HD) and a very big and very detailed marker, but the program can't detect more than ∼490 points, which isn't enough for accuracy.

I have tried: updating the camera driver (I'm on Windows 10, so it should be okay), a bigger and more detailed marker, better lighting, manual focus, and zooming the camera via the Logitech interface (which, by the way, reduces my FOV), but no success.

Any ideas?

The solution might be to increase the video input resolution (my computer is probably capable of processing bigger images).

Is the input resolution hardcoded anywhere? I can't find it.

Thanks for your incredible work!
Is it a good idea to add color to the marker?
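
As a general point of reference (this is a generic OpenCV snippet, not a pointer to where UCS sets its resolution), OpenCV's Python API lets you request a capture resolution like this; whether the camera honors it depends on the driver:

import cv2

cap = cv2.VideoCapture(0)                   # open the default camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)     # request full-HD width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)    # request full-HD height

ok, frame = cap.read()
if ok:
    print(frame.shape)                      # check the resolution actually delivered
cap.release()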

uArmCreatorStudio for Lite 6

Hi,
I have a Lite 6 robot from uFactory and I would like to add some computer vision like you have in this program. Would it be possible to use uArmCreatorStudio with a Lite 6 robot?
If so, how should I proceed?
Thanks a lot!
BR,
Juan

Using UCS with a 6-axis robot arm

How can I use this program with a 6-axis robot arm? What modifications will be required in the program?
My robot runs Marlin-based firmware; I can even adapt it to the uArm custom protocols.

High dpi scaling problem

Hello,

The software doesn't scale on a high-DPI screen (4K); as a result, everything looks very small.

contributing guide

Hi,

I was wondering if we could have contributing guidelines, as things are still a bit unclear to me.

  1. What should we use to track issues: this repo, uArm's fork, or the forum?
  2. Where should PRs go: this repo, or uArm's fork?
  3. Do you think it would be a good idea to have a Slack team for dev conversations?

Thanks,
Yi

2 cents on licensing (feel free to ignore)

I'm not a license expert, so I only mean to provide some generic ideas for the long run.

I noticed that this software and the libraries it uses, including PyQt, are under the GPL license. IIUC, GPL is more restrictive than MIT or Apache, so it sometimes discourages businesses from using GPL software when the intention is to use it but keep their own code private. Given that businesses often drive OSS projects, the situation may not be ideal.

If GPL wasn't intentionally chosen, and you think it's a good idea for this software to be more open, here are some options:

  1. Isolate the GUI code from the core library, so the core library can be under another kind of license that does not touch GPL software.
  2. Try other, more open Python GUI libraries (I'm not sure how good other libraries are compared to PyQt) and steer the entire repo away from GPL.

Camera setting error with OpenCV 3.2

It was erroring out during camera setup. I didn't dig too much, but it looks like some checks were missing before initialization. I was able to work around the issue by using OpenCV 3.1 to set up the camera first, then launching again with OpenCV 3.2.

The good thing is that OpenCV 3.2 has the fix for the tracking issue, so it does not error out at calibration.

Robot          ERROR: Tried setting uArm when it was already set!
GUI            Opening Devices Window
OpenCV: out device of bound (0-0): 1
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 2
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 3
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 4
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 5
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 6
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 7
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 8
OpenCV: camera failed to properly initialize!
OpenCV: out device of bound (0-0): 9
OpenCV: camera failed to properly initialize!
2016-12-23 21:10:52.581 python[7393:10326436] An instance 0x107bec070 of class AVCaptureDALDevice was deallocated while key value observers were still registered with it. Observation info was leaked, and may even become mistakenly attached to some other object. Set a breakpoint on NSKVODeallocateBreak to stop here in the debugger. Here's the current observation info:
<NSKeyValueObservationInfo 0x10dc83370> (
<NSKeyValueObservance 0x1080c4bc0: Observer: 0x10c4d7470, Key path: open, Options: <New: NO, Old: NO, Prior: NO> Context: 0x7fff776e53c0, Property: 0x1080c3fb0>
)

opencv:

$ git describe
3.2.0

uArm Servo Speed---Creator Studio

Hello,

I'm new to the uArm world and am using some projects with a uArm metal to help me learn very basic programming.

I had some success using the uArm Creator Studio "custom" Python tool that allows one to enter script. However, the remaining issue I am having is with servo speeds. The documentation states that the servo speed settings are in cm/sec, and it seems that the lowest speed is still pretty fast (i.e. ~1 cm/sec). Is there a way to change the default servo speeds to something very slow? I know that when programming directly through Arduino, one can set up servo speeds on a 0 to 255 scale in order to achieve fine-tuned control:

uarm.setServoSpeed(SERVO_R, 0); // 0=full speed, 1-255 slower to faster
uarm.setServoSpeed(SERVO_L, 0); // 0=full speed, 1-255 slower to faster
uarm.setServoSpeed(SERVO_ROT, 50); // 0=full speed, 1-255 slower to faster

However, is there such a capability through uArm Creator Studio?

Thank you very much for the assistance.

Regards

Cross Platform Compatibility - Hard coded directory separators

Hi.

Just downloaded it to have a poke around as it looks very impressive on the demo video for rapid development.

I might have done something wrong, but I had to do a quick find-and-replace on the directory string separators to get it up and running on Linux. I haven't investigated further, but it might be that os.path.join() could be used for future portability.
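
For illustration only (the paths below are made-up examples, not code from the repo), os.path.join builds a path with whatever separator the current OS uses:

import os

# A hardcoded separator only works on one platform:
windows_only_path = "Resources\\Icons\\play.png"             # breaks on Linux/macOS

# os.path.join picks the separator for the current OS:
portable_path = os.path.join("Resources", "Icons", "play.png")
print(portable_path)   # Resources/Icons/play.png on Linux, Resources\Icons\play.png on Windows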

Python 3.5.2
PyQt5 5.6.1
OpenCV 3.1.0-3

Great work. Looking forward to exploring.

Communication protocol: issue with wrist

In the current version of the COM protocol, the [ssS#V#] command, which sets servos, will fail when setting the wrist servo, because the uArm library tries to get calibration data for the wrist servo; that data doesn't exist, so it offsets the wrist servo and sets it to zero.

This should be fixed in the next version of the protocol, but that's not on my side of things.
