Monday, February 05, 2007

Project is finished for now

The project report was handed in today, after we'd spent a lot of time writing, correcting and rewriting it over and over again :)

We have decided to put all of our developed solutions on the web.

We have made four distinct Visual Studio Solutions and the program running on the NXT available for download.

They are:

GridLibrary : A number of classes compiled into a DLL representing the grid our robots act on, including an implementation of the A* (A-star) algorithm.

NXTRemoteControl : Another DLL which, like iCommand, allows users to control Lego Mindstorms NXT robots over Bluetooth.

MultiRoboticAgentsBulletin : A system of multiple simple intelligent agents each controlling a robot on the floor.

MultiRoboticAgentsNegotiation : A system of multiple BDI agents each controlling a robot on the floor.

Furthermore we have made our report available for download.

We are planning to put up a number of videos of our four robot agents solving tasks on the floor.

Wednesday, December 06, 2006

Two NXTs collaborating to move three objects from a grid

We now have a working multi-agent system. The video below shows two NXTs removing their respective objects from the Lego grid. The video is low quality because it is hosted at Google Video. A high-res version is available here (25.0 MB).

Our updated problem is now defined as follows:

Two teams, black and silver, of modified tribot agents collaborate to move a number of black and silver items in a closed world back to their individual home bases (their initial positions). Only silver team tribots can remove silver items, and black items can only be removed by the black team. If a tribot encounters an item it is not able to remove, it must communicate the finding to the other tribots.

For all Danes who did not get to read the two-page special on our project in Ingeniøren (The Engineer), here is a link to the online version with pictures.

Friday, December 01, 2006

Single agent identifying and retrieving objects

We have made a lot of progress since the last post. We now have a fully functional implementation of the A* path-planning algorithm in our C# agent. We have also built a new grid out of genuine Lego plates, using black tape and pieces of reflective foil to mark the actual grid, which allows the NXT to navigate.
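The core of that path planning can be sketched as follows. This is a minimal Python sketch for illustration only (the project's real implementation is in C#, inside GridLibrary), assuming a 4-connected grid with a set of blocked cells and unit move cost:

```python
import heapq

def astar(start, goal, walls, width, height):
    """A* shortest path on a 4-connected grid, Manhattan-distance heuristic.

    start, goal: (x, y) tuples; walls: set of blocked (x, y) cells.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    def h(p):
        # Manhattan distance never overestimates on a 4-connected grid,
        # so A* stays admissible and returns a shortest path.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while open_heap:
        f, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue  # off the grid
            if nxt in walls:
                continue  # blocked cell
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

On an empty grid the result is a straight shortest path; with a blocked cell in the way, the search detours around it.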

We currently have a working solution that enables one NXT controlled by an agent on the .NET platform to identify and retrieve objects of a desired color. We shot a video of a run:

At the moment we are working on putting more players in the game. Hopefully we will have a working solution on Wednesday.

Friday, November 17, 2006

"On the floor"-action

Today we have made some satisfying progress. The status so far on the different parts of our project is as follows.

At the moment we operate with three different colors on the floor, so that the NXT can differentiate between floor, line and intersection in our grid: a black line, yellow intersections and a gray floor. This is not ideal, because the floor and the intersections are both brighter than the line. That causes problems when we want to distinguish between the two: when the light sensor is situated between the black line and a yellow intersection, it reads both black and yellow and returns a mean that "looks like" gray. I.e., readings can be ambiguous.

The solution to this problem is to create a grid where the line has a brightness between that of the intersections and the floor. For example, let the floor be brighter than the line and the intersections darker. In this environment no ambiguous readings can occur, because a mean of the line value and either neighboring value never looks like the third surface.
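With that brightness ordering, a single raw reading can be classified with two thresholds. A sketch in Python with made-up calibration values (real values would come from reading the sensor over each surface):

```python
# Illustrative calibration values, ordered: intersection < line < floor.
INTERSECTION = 30
LINE = 45
FLOOR = 60

# Place each threshold halfway between adjacent surface values.
LOW = (INTERSECTION + LINE) / 2    # 37.5
HIGH = (LINE + FLOOR) / 2          # 52.5

def classify(reading):
    """Map a raw light-sensor value to a surface type."""
    if reading < LOW:
        return "intersection"
    if reading < HIGH:
        return "line"
    return "floor"
```

The point of the ordering is that a reading averaged between the line and either neighbor lands near one of the thresholds, between the two surfaces actually under the sensor, and can never be mistaken for the third surface.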

NXT program:
The program running on the NXT is basically a big switch-statement.
It runs forever and listens on a Bluetooth inbox. Depending on which message is delivered (from our .NET application) in this inbox, the NXT can turn left, turn right, follow the line to the next intersection, turn around, pick up a ball, release the ball or leave the ball. (I might be missing some actions...)

This program is almost complete for our final application; only minor corrections/additions are required.
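The control flow of that loop looks roughly like this. The real program runs on the NXT itself, so this Python sketch only mirrors its structure; the message codes and action names are made up, and the actual protocol between the .NET application and the NXT is our own:

```python
# Illustrative message codes; the real values depend on the chosen protocol.
TURN_LEFT, TURN_RIGHT, FORWARD, TURN_AROUND, PICK_UP, RELEASE = range(6)

def dispatch(msg, actions):
    """The 'big switch-statement': map one inbox message to one robot action."""
    handlers = {
        TURN_LEFT: actions.turn_left,
        TURN_RIGHT: actions.turn_right,
        FORWARD: actions.follow_line_to_next_intersection,
        TURN_AROUND: actions.turn_around,
        PICK_UP: actions.pick_up_ball,
        RELEASE: actions.release_ball,
    }
    handlers[msg]()

def run(inbox, actions):
    """Run forever: block on the Bluetooth inbox, dispatch each message."""
    while True:
        dispatch(inbox.receive(), actions)
</n```

Keeping the robot-side program this dumb means all planning stays in the .NET agent; the NXT only has to execute one primitive at a time.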

.Net application:
By now we have an application that can control one NXT in the grid. It takes initial values: the NXT position and direction, for example x=1, y=1, direction="North".

We can then input a new position, and the program will calculate a shortest path to it from the current position. For this we use an implementation of the heuristic search algorithm A*. The NXT will then (of course) go to the new position, following the calculated path.
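Turning the calculated path into the turn/forward primitives the NXT understands might look like this. A Python sketch under stated assumptions: the command names are illustrative, and y is assumed to increase toward North:

```python
# Compass directions in clockwise order, and the grid step each one takes
# (assuming y grows toward North; flip the signs if the grid is oriented
# the other way).
DIRS = ["North", "East", "South", "West"]
STEP = {"North": (0, 1), "East": (1, 0), "South": (0, -1), "West": (-1, 0)}

def commands_for_path(path, heading):
    """Convert a list of grid cells into turn/forward commands for the NXT."""
    cmds = []
    for cur, nxt in zip(path, path[1:]):
        delta = (nxt[0] - cur[0], nxt[1] - cur[1])
        target = next(d for d, s in STEP.items() if s == delta)
        # How many 90-degree clockwise turns take us from heading to target?
        diff = (DIRS.index(target) - DIRS.index(heading)) % 4
        if diff == 1:
            cmds.append("turn_right")
        elif diff == 3:
            cmds.append("turn_left")
        elif diff == 2:
            cmds.append("turn_around")
        cmds.append("forward")  # follow the line to the next intersection
        heading = target
    return cmds
```

Each grid step becomes at most one turn plus one line-follow, which matches the primitive actions the NXT program exposes over Bluetooth.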

Quite nice to finally have something working "on the floor".

Now the real fun begins. We will implement a complex agent system with multiple agents that will cooperate to solve a task in the grid.

Friday, November 10, 2006

Different light sensor values

After testing all the sensors we found that two of the five showed different values. Instead of calibrating the sensors, we now reset them to their defaults, which can be done in NXT-G. We hope that the ten other light sensors we have ordered will return the same values as the three consistent sensors.

More hardware problems

Yesterday everything seemed to work for us. We made a grid out of black insulation tape with yellow dots at the intersections. Yellow seemed to return the best value when light was reflected off it.
Then we got the communication between the computer and the NXT up and running: the NXT stops and sends a message when it detects a yellow dot or when the bumper is pressed, and then waits for a command from the computer.

Today the light sensors all returned different values and had a hard time distinguishing between the gray floor and the yellow dots. This could be because the NXT is running low on battery, but it could also be the change in ambient light. We are now trying a solution where the line returns a value between the floor value and the intersection value.

Wednesday, November 01, 2006

New goals for the project

After realizing a number of major difficulties with creating a lifting mechanism, we've decided to fall back on Lego's Tribot design for our project.

This decision will perhaps set us back a bit, but we're convinced that it will pay off in the end.

The altered problem is now defined as follows:

Two teams, red and blue, of modified tribot agents collaborate in removing a number of blue and red items in a closed world. Only red team tribots can remove red items, and blue items can only be removed by the blue team. If a tribot encounters an item which it is not permitted to remove, it must communicate the finding to the other tribots.

The world will start off as a simple grid made of insulation tape, which the agents can navigate around in. When an agent is communicating, this will be indicated by an extra light sensor on the tribot being switched on and off, flashing its red light.