Lab Course Humanoid Robots

Robots are versatile systems that provide vast opportunities for active research and a wide range of applications. Humanoid robots, for example, have a human-like body and can thus act in environments designed for humans. They are able to, e.g., climb stairs, walk through cluttered environments, and open doors. Mobile robots with a wheeled base are designed to operate on flat ground to perform, e.g., cleaning and service tasks. Robotic arms are able to grasp and manipulate objects.

Participants will work in groups of 2 or 3 on one of the possible topics.

At the end of the semester, each group will give a presentation and demonstration of their project, accompanied by an oral explanation. The whole presentation should be approximately 10 minutes long. Every member of the group should present their part of the development of the system in a few sentences/slides. After the presentation, each group will be asked a few questions by the HRL staff members or, preferably, by the other students. Everyone is required to be present and to watch the presentations of the other groups.

Aside from the final demonstration, every group is required to submit a lab report. The report is due on the morning before the demonstration. Please describe the task you had to solve, how you approached the solution, what parts your system consists of, any particular difficulties you encountered, and how to compile and use your software. Please include a sufficient number of illustrations. Apart from the content, there are no formal requirements for this document. One lab report per group is sufficient; it must be pushed to the group's git repository before the lab presentation.

The lab grade depends on the final presentation and on how well the assigned task was solved, weighted 30%/70%.

Participants are expected to have Ubuntu Linux installed on their personal computers. The specific requirements for each project are listed below.

The mandatory Introductory Meeting takes place in person (see important dates below).


Semester: WS
Year: 2024
Course Number: MA-INF 4214
Links: Basis
Course Start Date: 09.10.2024
Course End Date: 19.03.2025
ECTS: 9
Responsible HRL Lecturers:


Important dates:

All interested students have to attend the Introductory Meeting. There, we will present the projects, the schedule, and the registration process, and answer your questions.

09.10.2024, Wednesday, 10:00-11:00, Room 1.047: Introductory Meeting (mandatory) [presentation slides]
13.10.2024, Sunday: Registration deadline and topic selection on our website
20.10.2024, Sunday: Registration deadline in BASIS
23.01.2025, Thursday: Midterm lab presentation
19.03.2025, Wednesday: Lab presentation and deadline for lab documentation

After the Introductory Meeting, each participant arranges an individual schedule with the respective supervisor.

Registration

The registration is closed.

Report template

Please use the following template for the lab report:
[Report template]

Projects:

LLM meets robotics (industry project)
Supervisor: Benedikt Kreis

The goal of the project is to pick unknown objects from a moving conveyor belt and sort them into different bins. Instead of hard-coded control commands, the robot has to exploit the capabilities of LLMs (Large Language Models) and VLMs (Vision Language Models). This project is offered in collaboration with a local industry partner.

Robotic sorting using Reinforcement Learning
Supervisor: Ahmed Shokry

The task is to sort cubes according to certain features using a robotic arm. The robot will use Reinforcement Learning and information from an RGB-D camera to identify, locate, and pick the cubes and place them in the corresponding bins.

Learning-based navigation among dynamic obstacles
Supervisor: Jorge de Heuvel 

Using state-of-the-art deep reinforcement learning methods, you will teach a robot to navigate among static and dynamic obstacles. Here, a neural network is trained that directly controls the robot based on observations from a LIDAR sensor. You will design your own learning architecture, including the network design needed to build a solid understanding of the environment, which is only partially observable to the robot, and the reward function needed for informative and efficient learning of the navigation objectives. After the robot has been successfully trained in a PyBullet-based simulation, the goal is to transfer the policy onto a real robot using ROS.
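
To give an impression of the reward design involved, below is a minimal sketch of a navigation reward with a progress term, a step penalty, and collision/goal termination. The observation layout, weights, and thresholds are illustrative assumptions and will differ from your final design.

```python
import numpy as np

# Minimal sketch of a navigation reward, assuming the environment exposes
# the robot position, the goal position, and the raw LIDAR ranges each step.
# All weights and thresholds here are illustrative placeholders.
def navigation_reward(robot_xy, goal_xy, prev_goal_dist, lidar_ranges,
                      collision_dist=0.2, goal_dist_thresh=0.3):
    goal_dist = np.linalg.norm(goal_xy - robot_xy)

    if np.min(lidar_ranges) < collision_dist:   # crashed into an obstacle
        return -10.0, True
    if goal_dist < goal_dist_thresh:            # reached the goal
        return 10.0, True

    # Dense shaping: reward progress towards the goal, penalize each step
    # to encourage short, efficient paths.
    progress = prev_goal_dist - goal_dist
    return 2.0 * progress - 0.01, False


# Example usage with dummy values.
reward, done = navigation_reward(
    robot_xy=np.array([1.0, 0.5]),
    goal_xy=np.array([3.0, 2.0]),
    prev_goal_dist=2.6,
    lidar_ranges=np.full(360, 4.0),
)
print(reward, done)
```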

Quadruped for obstacle courses
Supervisors: Murad Dawood/Shahram Khorshidi

The goal of this project is to enable the quadruped to navigate through obstacle courses while using perception to adapt to different velocities, obstacles, and staircases.

Autonomous racing
Supervisor: Nils Dengler

The task is to train a reinforcement learning agent to autonomously drive around a given race track; a minimal training sketch is shown below. For training in simulation we use the PyBullet environment from [1]. After successfully training the agent, we deploy it on a real F1TENTH car and drive a track in the hallway of our building.

[1] https://github.com/gonultasbu/racecar_gym-1
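
For orientation, below is a minimal training-loop sketch using stable-baselines3 with a standard Gymnasium environment as a stand-in; this is an assumed setup, and in the project the stand-in would be replaced by the PyBullet race-track environment from [1] with appropriate hyperparameters.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# "CartPole-v1" is only a stand-in so the sketch runs out of the box; in the
# project it would be replaced by the race-track environment from [1].
env = gym.make("CartPole-v1")

# Train a PPO agent with default hyperparameters (illustrative only).
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Roll out the trained policy for one episode.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```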

Rock, paper, scissors
Supervisor: Nils Dengler

The task is to implement a version of Rock Paper Scissors on our 5-finger Psyonic hand [1].

[1] https://www.psyonic.io/ability-hand

P2G: Predict to Perceive and Grasp for Mobile Manipulation
Supervisor: Rohit Menon

This project aims to use shape prediction to aid a mobile manipulator in both next-best-view planning and grasp pose detection. With shape prediction, the robot should be able to complete the manipulation task with a higher success rate and in less time.

Find the viewpoint!
Supervisor: Sicong Pan

The task is to find the viewpoint from which a given RGB image was taken, within the context of an eye-in-hand tabletop configuration.

Social force model for crowd simulation
Supervisor: Subham Agrawal

The task is to implement the social force model (SFM) for crowd simulation and visualize it in a 3D environment.
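
For orientation, below is a minimal Helbing-style SFM sketch with a driving force towards each pedestrian's goal and exponential repulsive forces between pedestrians; the parameter values are illustrative, not calibrated.

```python
import numpy as np

# Illustrative parameters: relaxation time, repulsion strength and range,
# pedestrian radius, desired speed, and integration step.
TAU, A, B, RADIUS, V_DESIRED, DT = 0.5, 2.0, 0.3, 0.3, 1.3, 0.05

def social_forces(pos, vel, goals):
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        # Driving force: relax the current velocity towards the desired
        # velocity pointing at the pedestrian's goal.
        direction = goals[i] - pos[i]
        direction /= np.linalg.norm(direction) + 1e-9
        forces[i] += (V_DESIRED * direction - vel[i]) / TAU

        # Repulsive forces from all other pedestrians.
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff) + 1e-9
            forces[i] += A * np.exp((2 * RADIUS - dist) / B) * diff / dist
    return forces

# One hundred explicit Euler steps for a toy two-pedestrian scene.
pos = np.array([[0.0, 0.0], [4.0, 0.1]])
vel = np.zeros_like(pos)
goals = np.array([[5.0, 0.0], [-1.0, 0.0]])
for _ in range(100):
    f = social_forces(pos, vel, goals)
    vel += f * DT
    pos += vel * DT
print(pos)
```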

Robot navigation using LLM
Supervisor: Xuying Huang

This project aims to use a large language model (LLM) to enable robot navigation. The robot should be able to understand spoken instructions, process the command using the LLM, and convert it into specific actions (such as moving to the kitchen).
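
As a rough illustration of the intended pipeline, the sketch below maps a transcribed instruction to a navigation goal via an LLM that is prompted to answer in JSON. The location map and the query_llm placeholder are hypothetical; in the project, the placeholder would be replaced by a real LLM call and the resulting goal forwarded to the navigation stack (e.g., as a ROS goal).

```python
import json

# Known locations the robot can navigate to (illustrative map entries).
KNOWN_LOCATIONS = {"kitchen": (3.2, 1.5), "office": (0.4, 4.0), "door": (5.0, 0.0)}

SYSTEM_PROMPT = (
    "You translate spoken instructions into navigation commands. "
    "Answer only with JSON of the form "
    '{"action": "go_to", "location": "<one of: kitchen, office, door>"}.'
)

def query_llm(system_prompt, user_text):
    # Hypothetical placeholder: in the project this would call an actual
    # LLM (e.g., a local model or a hosted API).
    return '{"action": "go_to", "location": "kitchen"}'

def instruction_to_goal(user_text):
    reply = query_llm(SYSTEM_PROMPT, user_text)
    command = json.loads(reply)
    if command.get("action") == "go_to" and command.get("location") in KNOWN_LOCATIONS:
        return KNOWN_LOCATIONS[command["location"]]
    raise ValueError(f"Could not ground instruction: {user_text!r}")

# Example: returns the map coordinates of the requested location.
print(instruction_to_goal("Please bring me to the kitchen"))
```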