Archive for the ‘Artificial Intelligence’ Category

Best Robotic Legs Ever?

Friday, July 6th, 2012

A very interesting development has been reported by the Daily Disruption News Desk regarding robotic legs that are claimed to “fully model walking in a biologically accurate manner.” This will come as good news for spinal cord injury patients. Those of us who follow developments in artificial intelligence and robotics will likely take note as well.

I read this account with fascination, and immediately wanted to sketch out my understanding in model form. Extending the colloquialism, to a hammer, everything is a nail – to a systems engineer, everything must be modeled. As conveyed in the article, human walking is controlled by a neural network called the central pattern generator (CPG), which is anatomically located in the lumbar region. Its purpose is to generate rhythmic muscle signals. The researchers said that, in its simplest form, the CPG can be modeled as a pair of neurons that fire signals in alternating fashion.
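To build some intuition, here is a rough Python sketch of one common way such a neuron pair is modeled, a Matsuoka-style half-centre oscillator: two units inhibit each other and slowly fatigue, so their outputs alternate the way antagonist muscle commands do. This is my own illustration of the general idea, not code or parameter values from the study.

    def half_centre(steps=2000, dt=0.01, tau=0.25, tau_a=0.5, beta=2.5, w=2.5, drive=1.0):
        """Euler-integrate a two-neuron, mutually inhibiting oscillator."""
        x = [0.1, 0.0]   # membrane states; the slight asymmetry breaks the initial tie
        a = [0.0, 0.0]   # adaptation ("fatigue") states
        trace = []
        for _ in range(steps):
            y = [max(0.0, xi) for xi in x]   # rectified firing rates
            for i, j in ((0, 1), (1, 0)):
                dx = (-x[i] - w * y[j] - beta * a[i] + drive) / tau
                da = (y[i] - a[i]) / tau_a
                x[i] += dx * dt
                a[i] += da * dt
            trace.append(tuple(y))
        return trace

    # Coarse text trace: '/' while neuron 0 dominates, '\' while neuron 1 does.
    print("".join("/" if y0 >= y1 else "\\" for y0, y1 in half_centre()[::25]))

Each unit's output would drive one of a pair of antagonist "muscles" at a joint, which is roughly what the article means by a simple half-centre controlling the hips.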

To complete the model, in addition to the neural architecture the robot needs musculoskeletal and sensory-feedback components. Roughly, this system can be modeled as shown:

I could be wrong, but this is how I understand the Robotic Leg System!

Co-author of the study, Dr. Theresa Klein, was quoted as saying “…we were able to produce a walking gait, without balance, which mimicked human walking with only a simple half-centre controlling the hips and a set of reflex responses controlling the lower limb.”

So, did you catch that? That was quite a surprising statement. Two things are totally counterintuitive to me. First, she said the robot works “without balance.” Does that mean that this robot does not need an “inner ear” to balance? Second, the CPG apparently converts coarse motor commands into forces applied at the hip joint only. The “dangling” part of the leg, the lower limb, just follows reflexively, implementing easily programmable commands that simply follow what is happening upstream at the hip.

Another implication of this analysis is that the brain proper plays less of a role in controlling gait than I would have guessed.

This would be a good time to confess that I could be totally wrong in my interpretation of this research and its result; I am learning as I go.

Speaking of which, the CPG model of this study is apparently a good facsimile of how gait is refined from early childhood steps through later improvement during the maturing process. The CPG in humans gets better over time as it learns the best way to walk by repetition.

This is exciting as I can see similarities between this system and what I am learning in my Udacity.com artificial intelligence class. The evolving understanding of complex bio-mechanical systems as well as advances in AI make this a great time to be a student of such things.

Top Down Robot Car!

Friday, June 29th, 2012


So with a title like Top Down Robot Car I bet you think I’m talking about a convertible version of a robot car. Sorry, not this time, although the reason a robot might want to implement the feature of ‘roof retractability’ could be an intriguing question – improved chances of picking up robot girls, perhaps?

In this case I’ll be talking about the design process for specifying and modelling a robot car using a top-down approach.

Disclaimer: I have never built a robot car before, in fact, I am just learning about artificial intelligence in my current Udacity class (cs373) which uses a robot car as the sample problem. I am however a systems engineer with experience in the aerospace industry. As such I thought it might be beneficial to compare and contrast a method familiar to me (top down) with the approach being used in the class (bottom up).

In my previous post, I essentially captured a bit of the bottom-up approach being used to teach the class. One of the lowest-level questions possible was the first one asked, from the robot’s perspective: where am I? That question, and its answer in implementation, embodies the function of ‘localization’. The Data Flow Diagram in that post described the input-process-output for localization in functional analysis notation.

Here I will sketch out a top-down design wherein localization is but one of a number of lower-level functions. Note that part of this approach is similar to the problem-solving method taught by Peter Norvig in Udacity’s cs212, the Design of Computer Programs.

The top-level requirements of a car might be: provide point-to-point transportation over a road network while observing driving rules and avoiding collisions with other objects in the environment.

A simple concept inventory then might be:

  • transport (the car, or platform)
    • location
    • speed
  • environment (the network of roads, traffic signals, etc.)
  • points (different locations along the path of the car, starting with the origin, ending with the destination)
  • sensors (detect environment, state, and objects)
  • guidance (what is the next point along the path?)
  • control (what speed and steering commands will get the car to the next point on its path?)
  • rules (constraints on how car moves in environment)
  • objects (anything that enters the car’s area of concern)
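To make the inventory concrete, here is a minimal Python sketch of how these concepts might map onto data structures and function stubs. The names, fields, and signatures are my own guesses for illustration, not anything prescribed by the class:

    from dataclasses import dataclass, field

    @dataclass
    class Point:
        """A location along the path, from origin to destination."""
        x: float
        y: float

    @dataclass
    class Transport:
        """The car (platform): location and speed are part of its state."""
        location: Point
        speed: float = 0.0

    @dataclass
    class Environment:
        """The road network, traffic signals, rules, and objects of concern."""
        roads: list = field(default_factory=list)
        objects: list = field(default_factory=list)

    def sense(car: Transport, env: Environment) -> dict:
        """Detect environment, state, and objects (stub)."""
        return {"location_estimate": car.location, "nearby_objects": env.objects}

    def guide(car: Transport, destination: Point) -> Point:
        """What is the next point along the path? (stub)"""
        return destination

    def control(car: Transport, next_point: Point) -> dict:
        """What speed and steering commands get the car to the next point? (stub)"""
        return {"speed": car.speed, "steering": 0.0}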

Upon translating these concepts into data flow diagram format, several important distinctions have been made, as described below.

a simple data flow diagram

a rough draft, likely to need revisiting as class proceeds

Things external to the system are abstracted as rectangular shapes. Functions internal to the system are enclosed in ovals, and information that must be shared is represented by a data store symbol, two parallel lines. Data flows are represented by directional arrows.

One other important note about abstraction: the class is about artificial intelligence as applied to cars; it is not about cars themselves – cars are mundane. As such the car is simplified into the ‘transport’ function. At the end of the class we’d just buy a car and bolt on all the required sensors, and associated AI componentry, as an appliance. Important functions like sense, guide, and control are isolated and shown to interact with the car (or ‘platform’) via data flow. Understanding of these is germane to the class, and they warrant standing alone for the purposes of functional analysis.

This crude model will have to be revisited as we go, but for now it suffices as a good starting point, and provides a reasonable introduction to top down design principles.

Will Robots Rule the Road?

Thursday, June 28th, 2012

Robot cars are a reality, having already proven themselves in desert and urban road settings, accumulating thousands of miles of travel in the process. Sebastian Thrun was the leader of a Stanford University team whose entrant, Stanley, won the DARPA Grand Challenge in 2005. Sebastian is also the co-founder of the online university Udacity.com. Fortunately I will benefit from both of these facts because, as of yesterday, I enrolled in Udacity’s class, cs373, Artificial Intelligence (Programming a Robotic Car), as taught by Sebastian himself. That’s kind of like being asked to land on the moon with Neil Armstrong as pilot.

Only one lecture in, but I can tell already it is going to be a great course. Taught using Python, it seems to be a perfect extension of what I’ve learned in my two preceding Udacity classes, cs101 and cs212. Look for my ‘udacious’ impressions in an earlier post.

Robotic driving necessarily involves the application of artificial intelligence insofar as the car has to make autonomous decisions continuously based on its perception of its environment using onboard sensors. It also has to effect correct guidance and control commands using its mechanical actuators. Much of this type of command and control is commonplace in other applications, but what makes this different is the need to inform those commands based on real-time application of probabilistic modeling and decision-making.

To digest the material, and check my understanding, I will reflect on it herein.

a Data Flow Diagram for a simple localization function

The first function is called localization; the robot has to accurately know where it is compared to where it has been told it might be. Note that the image embedded above is a Data Flow Diagram in which I will capture my interpretation of this concept using a model notation familiar to the practice of functional analysis.

The map of where the robot might be has several components. The first is represented by a base probability that is equal to 1 divided by the number of possible locations. If, however, some of the locations have distinguishing features that can be sensed by the robot, then another factor can be introduced. Detection of a sensed value representative of the feature indicates a higher likelihood that the robot is next to that feature. However, there may be more than one instance of that feature, so a probabilistic signature has to be formed. The overall map is representable as the product of the feature map and the base probability.

The robot’s sensing of a location feature can be simulated in code, and then provides the basis for a probability equation that allows the robot to say to itself: if I sense this type of feature, then, compared against the input location map, I am at any given location with a particular probabilistic certainty.
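As a check on my own understanding, here is a small Python sketch of that idea over a one-dimensional world. The world layout, feature names, and probability values are placeholders I made up, not the lecture’s actual numbers:

    # A 1-D world of cells, some of which carry a distinguishing feature ('door').
    world = ['wall', 'door', 'door', 'wall', 'wall']

    # Base probability: 1 divided by the number of possible locations (a uniform prior).
    p = [1.0 / len(world)] * len(world)

    P_HIT = 0.6    # likelihood weight when the sensed feature matches a cell
    P_MISS = 0.2   # likelihood weight when it does not

    def sense(prior, measurement):
        """Multiply the prior by the feature map for this measurement, then normalize."""
        posterior = [prob * (P_HIT if cell == measurement else P_MISS)
                     for prob, cell in zip(prior, world)]
        total = sum(posterior)
        return [q / total for q in posterior]

    print(sense(p, 'door'))   # the two 'door' cells now carry most of the probability

With these made-up numbers, sensing ‘door’ shifts the distribution from a uniform 0.2 everywhere to roughly 0.11, 0.33, 0.33, 0.11, 0.11 – the particular probabilistic certainty for each location.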

The requisite math, and its implementation in Python, were covered in the lecture. To me it was gratifying that what I have been learning in cs101 and cs212 enables the techniques required to solve AI problems. Furthermore, I welcome this class because, despite five semesters of math for my aerospace engineering degree, I was not deeply exposed to probability theory in that curriculum.

As an aside, and in the spirit of “synthesis,” which is part of the system engineer job description, I was struck by the similarity between localization and the “orientation” part of the OODA Loop. I have discussed the OODA Loop in a different context, for my own decision-making purposes, in an earlier post. So for different reasons I am exposed to similar concepts once again. I should not be surprised, though. The OODA Loop represents a command and control mechanism that in fact includes a localization component, one which is carried out in real time by a pilot for effective decision-making in combat. As such, combat piloting and robotic driving are more similar than you might think at first glance.

I don’t know what’s next in the class, but if it continues to be as interesting as localization was, I’ll be a happy student.