|
# Robotic-Arm-Control-with-Python

This project lets you control a 5-servo robotic arm using Python and an ESP32 (STEAMakers, MicroPython). It includes a Tkinter GUI, real-time manual control, sequence recording and playback, voice and text command execution, and a basic 3D simulation to visualize movements.

This project began as an attempt to improve communication between a controlling app and the robotic arm itself. Initially, it was a simple Bluetooth link between an app created with App Inventor and an ESP32 STEAMakers board programmed with ArduinoBlocks. The project now offers the following features:

- Manual control with sliders
- Recording of timed movement sequences, saving them locally, and playing them back
- Voice and text command execution (for example: “set the elbow to 120º”), with support for multiple commands at once
- Visualization of the arm in a 3D simulation

About the AI:

The project is designed to use a local AI through ollama (for example, I've used the models llama3, llama3.2:1b and mistral), or a free online one, such as command-xlarge. Although they work sometimes, they're not very smart, since they're free or optimized to run on a regular computer, so they may give invalid or unprocessable answers; the code is designed to handle those cases. The AI is also prepared to listen in Spanish and answer in English, but changing that is as simple as changing (or directly translating) the prompts passed to it in the functions ask() and ask_local() in the script model_ai_robotic_arm.py.
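
To give an idea of what the local option looks like, here is a minimal sketch of an ask_local()-style helper that queries a local ollama server through its REST API. The model name and prompt are placeholders, and the real function in model_ai_robotic_arm.py may be more elaborate.

```python
# Minimal sketch of a local "ask" helper using ollama's REST API.
# The model name and prompt are placeholders; the project's actual
# ask_local() in model_ai_robotic_arm.py may differ.
import requests

def ask_local(prompt: str, model: str = "llama3") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local("Reply only with commands like S3:120. Set the elbow to 120 degrees."))
```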

The code on the computer communicates with the ESP32 through a serial port (in my case, COM3; change that and the baud rate if needed). The communication is built with pyserial: the computer writes commands of the form "S1/S2/.../S5:0-180" (for example, "S3:120") and the microcontroller reads and processes them. Although the project is designed around that link, the code also works perfectly without a microcontroller, and the graphical simulation still shows you what you're doing.
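
As a reference, the PC side of that protocol can be as simple as the sketch below. The port, baud rate and the newline terminator are assumptions based on the defaults mentioned above, and send_servo_command() is a hypothetical helper; the actual GUI code is more elaborate.

```python
# Minimal sketch of the PC side of the "S<servo>:<angle>" protocol.
# Port, baud rate and the newline terminator are assumptions; adjust for your setup.
import time
import serial  # pyserial

PORT = "COM3"
BAUDRATE = 115200

def send_servo_command(ser: serial.Serial, servo: int, angle: int) -> None:
    """Send a single command such as "S3:120"."""
    angle = max(0, min(180, angle))          # clamp to the servo range
    ser.write(f"S{servo}:{angle}\n".encode())

if __name__ == "__main__":
    with serial.Serial(PORT, BAUDRATE, timeout=1) as ser:
        time.sleep(2)                        # give the ESP32 time to reset after the port opens
        send_servo_command(ser, 3, 120)      # move servo 3 (e.g. the elbow) to 120 degrees
```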

All you've got to change before using the project:

- Serial port and baud rate (in this case, "COM3" and 115200)
- GRAPHIC_ZERO and INIT_POSITIONS, with the values that best fit your own arm. You can run the code first and refine the values later.
- The servo labels, if English isn't your language
- servo_freq and min_duty / max_duty in angle_to_duty, depending on the characteristics of your servos (see the sketch after this list)
- The pins where you connect your servos
- The Groq and Cohere API keys. Obviously, you'll need to use your own; I'm not giving you mine.
- The language used for translation when talking to the AI, if you change the language
- The AI model used (switch to any model you want, and switch between the functions ask() and ask_local())
- The prompt for the AI. I tried several different prompts, and all of them produce errors; this is the one with the fewest errors I've seen, so don't change its structure much. The AI is also really slow at times, so be patient. If an error occurs, it's most likely because the AI returned a generic or vague answer like "Okay, I'll reply..." instead of a proper command list, not because the system is broken.
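
To make the servo-related items above more concrete, here is a minimal MicroPython sketch of what the ESP32 side can look like. The GPIO pins, servo_freq and min_duty/max_duty values are assumptions you'll need to tune, and reading commands from sys.stdin assumes they arrive over the USB serial connection; the actual firmware in this project may differ.

```python
# MicroPython sketch of the ESP32 side: angle-to-duty conversion and command handling.
# Pin numbers, servo_freq and min_duty/max_duty are assumptions; tune them to your servos.
import sys
from machine import Pin, PWM

SERVO_PINS = [13, 12, 14, 27, 26]      # example GPIOs, one per servo
servo_freq = 50                        # standard hobby-servo PWM frequency (Hz)
min_duty, max_duty = 26, 123           # roughly 0.5 ms to 2.4 ms pulses at 50 Hz (10-bit duty)

servos = [PWM(Pin(pin), freq=servo_freq) for pin in SERVO_PINS]

def angle_to_duty(angle):
    """Map an angle in [0, 180] to a 10-bit duty value for PWM.duty()."""
    angle = max(0, min(180, angle))
    return min_duty + (max_duty - min_duty) * angle // 180

while True:
    line = sys.stdin.readline().strip()        # e.g. "S3:120"
    if line.startswith("S") and ":" in line:
        index, angle = line[1:].split(":")
        servos[int(index) - 1].duty(angle_to_duty(int(angle)))
```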

Honestly, I don't think the AI is truly functional yet, but it's impressive to be able to connect an AI to a project like this. If you pay for a premium AI, I strongly recommend connecting its API to this project instead of the options I'm providing, because once you have an intelligent model you can take this project wherever you want.

All generated audio files and logs are saved in folders created automatically inside the project's root directory. You don't need to set absolute paths: the script checks whether these folders exist and creates them if needed, using ensure_folder_exists(folder_name).
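
If you want to reuse that idea elsewhere, ensure_folder_exists() boils down to something like the following sketch (the exact implementation in the project may differ slightly):

```python
# Minimal sketch of a folder-creating helper; the project's actual
# ensure_folder_exists() may differ in details.
import os

def ensure_folder_exists(folder_name: str) -> str:
    """Create folder_name next to this script if it doesn't exist and return its path."""
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), folder_name)
    os.makedirs(path, exist_ok=True)
    return path
```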

This project is, in my opinion, a really interesting way to learn how to establish communication between hardware and software in a more sophisticated way than with ArduinoBlocks and App Inventor, and an impressive example of how far a simple project like a robotic arm can go. The next step could be connecting the arm to a camera with OpenCV: it could detect colors, for example, and activate one sequence or another depending on the color detected. You can do whatever you want, but this is a good base to start from. If you're looking for something simpler and easier to introduce, for example in a class, I also recommend taking a look at my other robotic arm project, which is essentially the same but without the AI and the graphical simulation.

Lastly, if you're looking to introduce AI into any of your projects, the script (ModeloAIBrazoRobotico) is useful. Despite its name, it can be used in any other project, because it allows you to (see the sketch after this list):

- Record audio, translate it and convert it to text
- Convert text to speech (make the computer talk)
- Ask a local AI (ollama) or an online one, with several free models available (at least when I made the project; it's possible that in the future these models won't be available, or that there'll be better options)
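
The sketch below illustrates the first two points with commonly used libraries (speech_recognition and pyttsx3). Those library choices are assumptions and the translation step is not shown; the AI-query part is the ask_local() sketch shown earlier in this README.

```python
# Rough sketch of the audio side of the script: speech-to-text and text-to-speech.
# The library choices (speech_recognition, pyttsx3) are assumptions; the actual
# ModeloAIBrazoRobotico script may use different packages.
import speech_recognition as sr
import pyttsx3

def record_to_text(language="es-ES"):
    """Record from the default microphone and return the recognized text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio, language=language)

def speak(text):
    """Convert text to speech and play it aloud."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    heard = record_to_text()
    speak("You said: " + heard)
```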