Archive for the ‘Computer Science’ Category

Micro Data Centers – Independence Day for AOL?

Saturday, July 7th, 2012

Even as I learn to become a modern technologist, the field of computer science is changing constantly. I don’t even know what a modern technologist is, but it is a good enough term, because in some ways I am already an archaic technologist (goodbye, aerospace!). Since I want to keep eating and stay comfortably sheltered and clothed until I die, I have to modernize. To me that means skill acquisition in computer science, skills which I will hybridize with systems engineering (the still useful, repurposable part of what I used to do). Still, things change fast.

No sooner had I gotten comfortable with the idea of the cloud as a new paradigm enabled by vast, power-hungry data centers than I learned today that AOL is turning that concept on its head as well.

As described in his own blog post, AOL’s Michael Manos relates that when he got to the company he initiated a deep review of all aspects of operations. I found his comments on what that review entailed to be very interesting in and of themselves. Basically, the review sounded to me like an old-school systems engineering analysis of their operation. The result was a Technology Roadmap with three components: the first dealt with internal efficiencies, the second with technical challenges, and the third was an aggressive wish list of game-changing technical goals. That third group was referred to as “‘Nibiru’ after a mythical planet that is said to cross our solar system that wreaks havoc and brings about great change.”

Their primary Nibiru goal was to develop a data center environment that did not need a building. Their driving requirement for this was minimal physical touch. This would give them great flexibility in how they deliver their products and services. The result was the Micro Data Center. Attributes of this product include:

  • new technology suite
  • deploy-ability to “anywhere in the world with minimal to no staffing”
  • extremely dense compute capacity (for longest possible use once deployed)
  • deploy-ability anywhere, regardless of temperature and humidity conditions
  • ability to support/maintain/administer, remotely
  • fits within power envelope of any ‘normal building’
  • interoperability within the AOL cloud environment and capabilities

AOL claims to have accomplished all of this and declared Independence Day on July 4th, 2012, having successfully tested the concept in the field near Dulles airport in Virginia.

The bottom line for AOL, and why this is such a game changer for them, is that they can “have an incredible geo-distributed capacity at very low cost point in terms of upfront capital and ongoing operational expense.”

Manos’s post contains much more information about the advantages and future implications of this breakthrough. As for me, it is interesting to watch changes develop in the field even as I am learning it at a fast rate myself.

Independence Day, indeed; for both AOL and me!


Not a Micro Data Center – just old Towers in my Garage!


Best Robotic Legs Ever?

Friday, July 6th, 2012

A very interesting development has been reported by the Daily Disruption News Desk regarding robotic legs that are claimed to “fully model walking in a biologically accurate manner.” This will come as good news for spinal cord injury patients. Those of us who follow developments in artificial intelligence and robotics will likely take note as well.

I read this account with fascination, and immediately wanted to sketch out my understanding in model form. Extending the colloquialism: to a hammer, everything is a nail; to a systems engineer, everything must be modeled. As conveyed in the article, human walking is controlled by a neural network called the central pattern generator (CPG), which is anatomically located in the lumbar region. Its purpose is to generate rhythmic muscle signals. The researchers said that in its simplest form the CPG can be modeled by a pair of neurons that fire signals in alternating fashion.
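To make that alternating pair concrete for myself, here is a toy ‘half-centre’ sketch in python: two mutually inhibiting units with a slow fatigue term, which is enough to make their outputs alternate. This is strictly my own illustration with invented parameters, not the researchers’ actual model.

    # A toy half-centre oscillator: two mutually inhibiting "neurons"
    # whose firing alternates over time. My own illustration only.
    def half_centre(steps=300, dt=0.1, tau=1.0, w=2.0):
        u1, u2 = 0.1, 0.0   # membrane states (slightly asymmetric start)
        a1, a2 = 0.0, 0.0   # adaptation (fatigue) states
        history = []
        for _ in range(steps):
            y1, y2 = max(u1, 0.0), max(u2, 0.0)   # firing rates
            # each unit gets tonic drive, inhibition from its partner,
            # and fatigue from its own adaptation state
            u1 += dt * (-u1 + 1.0 - w * y2 - a1) / tau
            u2 += dt * (-u2 + 1.0 - w * y1 - a2) / tau
            a1 += dt * (-a1 + 2.0 * y1) / (5.0 * tau)   # slow fatigue
            a2 += dt * (-a2 + 2.0 * y2) / (5.0 * tau)
            history.append((y1, y2))
        return history

    for y1, y2 in half_centre()[::30]:
        print("flexor drive: %.2f   extensor drive: %.2f" % (y1, y2))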

To complete this model, in addition to the neural architecture, the robot needs musculoskeletal and sensory feedback components. Roughly, this system can be modeled as shown:

I could be wrong, but this is how I understand the Robotic Leg System!

Co-author of the study Dr Theresa Klein was quoted as saying “…we were able to produce a walking gait, without balance, which mimicked human walking with only a simple half-centre controlling the hips and a set of reflex responses controlling the lower limb.”

So, did you catch that? That was quite a surprising statement. Two things are totally counterintuitive to me. First, she said the robot works “without balance.” Does that mean that this robot does not need an “inner ear” to balance? Second, the CPG apparently converts coarse motor commands into forces applied at the hip joint only. The “dangling” part of the leg, the lower limb, just follows reflexively, implementing easily programmable commands that simply follow what is happening upstream at the hip.

Another implication of this analysis is that the brain proper plays less of a role in controlling gait than I would have guessed.

This would be a good time to confess that I could be totally wrong in my interpretation of this research and its result; I am learning as I go.

Speaking of which, the CPG model of this study is apparently a good facsimile of how gait is refined from early childhood steps through later improvement during the maturing process. The CPG in humans gets better over time as it learns the best way to walk by repetition.

This is exciting as I can see similarities between this system and what I am learning in my Udacity.com artificial intelligence class. The evolving understanding of complex bio-mechanical systems as well as advances in AI make this a great time to be a student of such things.

Top Down Robot Car!

Friday, June 29th, 2012


So with a title like Top Down Robot Car, I bet you think I’m talking about a convertible version of a robot car. Sorry, not this time, although the reason a robot might want to implement the feature of ‘roof retractability’ could be an intriguing question – improved chances of picking up robot-girls, perhaps?

In this case I’ll be talking about the design process for specifying and modeling a robot car using a top-down approach.

Disclaimer: I have never built a robot car before; in fact, I am just learning about artificial intelligence in my current Udacity class (cs373), which uses a robot car as the sample problem. I am, however, a systems engineer with experience in the aerospace industry. As such I thought it might be beneficial to compare and contrast a method familiar to me (top down) with the approach being used in the class (bottom up).

In my previous post, I essentially captured a bit of the bottom-up approach being used to teach the class. One of the lowest-level questions possible was the first one asked, from the robot’s perspective: where am I? That question, and its answer in implementation, embodies the function of ‘localization’. The Data Flow Diagram in that post described the input-process-output for localization in functional analysis notation.

Here I will sketch out a top-down design wherein localization is but one of a number of lower-level functions. Note that part of this approach is similar to the problem-solving method taught by Peter Norvig in Udacity’s cs212, the Design of Computer Programs.

The top-level requirement of a car might be: provide point-to-point transportation over a road network while observing driving rules and avoiding collisions with other objects in the environment.

A simple concept inventory then might be:

  • transport (the car, or platform)
    • location
    • speed
  • environment (the network of roads, traffic signals, etc.)
  • points (different locations along the path of the car, starting with the origin, ending with the destination)
  • sensors (detect environment, state, and objects)
  • guidance (what is the next point along the path?)
  • control (what speed and steering commands will get the car to the next point on its path?)
  • rules (constraints on how car moves in environment)
  • objects (anything that enters the car’s area of concern)
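To make this inventory concrete, here is one hypothetical translation into python class stubs. Every name, field, and method below is my own invention for illustration; none of it comes from the cs373 material.

    # Hypothetical class stubs for the concept inventory above.
    class Transport:
        """The car, or platform."""
        def __init__(self, location=(0.0, 0.0), speed=0.0):
            self.location = location
            self.speed = speed

    class Environment:
        """The network of roads, traffic signals, etc."""
        def __init__(self, roads=None, signals=None):
            self.roads = roads or []
            self.signals = signals or []

    class Sensors:
        """Detect environment, state, and objects."""
        def detect(self, environment, transport):
            return {}   # stub: would return raw sensor readings

    class Guidance:
        """What is the next point along the path?"""
        def __init__(self, waypoints):
            self.waypoints = list(waypoints)
        def next_point(self, location):
            return self.waypoints[0] if self.waypoints else location

    class Control:
        """What speed and steering commands get the car to the next point?"""
        def commands(self, location, target):
            return {"steer": 0.0, "throttle": 0.0}   # stub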

In translating these concepts to data flow diagram format, several important distinctions were made, as described below.

a simple data flow diagram: a rough draft, likely to need revisiting as the class proceeds

Things external to the system are abstracted as rectangular shapes. Functions internal to the system are enclosed in ovals, and information that must be shared is represented by a data store symbol, two parallel lines. Data flows are represented by directional arrows.

One other important note about abstraction: the class is about artificial intelligence as applied to cars; it is not about cars themselves – cars are mundane. As such, the car is simplified into the ‘transport’ function. At the end of the class we’d just buy a car and bolt on all the required sensors, and associated AI componentry, as an appliance. Important functions like sense, guide, and control are isolated and shown to interact with the car (or ‘platform’) via data flow. Understanding of these is germane to the class, and they warrant standing alone for the purposes of functional analysis.

This crude model will have to be revisited as we go, but for now it suffices as a good starting point, and provides a reasonable introduction to top down design principles.

Will Robots Rule the Road?

Thursday, June 28th, 2012

Robot cars are a reality, having already proven themselves in desert and urban road settings, accumulating thousands of miles of travel in the process. Sebastian Thrun was the leader of the Stanford University team whose entrant, Stanley, won the DARPA Grand Challenge in 2005. Sebastian is also the co-founder of the online university Udacity.com. Fortunately I will benefit from both of these facts because, as of yesterday, I am enrolled in Udacity’s class, cs373, Artificial Intelligence (Programming a Robotic Car), taught by Sebastian himself. That’s kind of like being asked to land on the moon with Neil Armstrong as pilot.

Only one lecture in, but I can tell already it is going to be a great course. Taught using python, it seems to be a perfect extension of what I’ve learned in my two preceding Udacity classes, cs101 and cs212. Look for my ‘udacious’ impressions in an earlier post.

Robotic driving necessarily involves the application of artificial intelligence insofar as the car has to make autonomous decisions continuously based on its perception of its environment using onboard sensors. It also has to effect correct guidance and control commands using its mechanical actuators. Much of this type of command and control is commonplace in other applications, but what makes this different is the need to inform those commands through real-time application of probabilistic modeling and decision-making.

To digest the first lecture, and check my understanding, I will reflect on it herein.

a Data Flow Diagram for a simple localization function

The first function is called localization; the robot has to accurately know where it is compared to where it has been told it might be. Note that the image embedded above is a Data Flow Diagram in which I capture my interpretation of this concept using a model notation familiar to the practice of functional analysis.

The map of where the robot might be has several components. The first is a base probability equal to 1 divided by the number of possible locations. If, however, some of the locations have distinguishing features that can be sensed by the robot, then another factor can be introduced. Detection of a sensed value representative of a feature indicates a higher likelihood that the robot is next to that feature. However, there may be more than one instance of that feature, so a probabilistic signature has to be formed. The overall map is representable by the product of the feature map and the base probability.

The robot’s sensing of a location feature can be simulated in code, and this provides the basis for a probability equation that allows the robot to say to itself: if I sense this type of feature, when compared to the input location map, then I am at any given location with a particular probabilistic certainty.
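Here is my paraphrase of that measurement update in python, along the lines of what the lecture covered; the variable names and the hit/miss weights are my own, recalled loosely rather than copied from the course.

    # Update a discrete location belief given one sensed feature.
    def sense(prior, world, measurement, p_hit=0.6, p_miss=0.2):
        # weight each cell by how well it matches the measurement
        posterior = []
        for p, feature in zip(prior, world):
            posterior.append(p * (p_hit if feature == measurement else p_miss))
        # normalize so the belief sums to 1 again
        total = sum(posterior)
        return [p / total for p in posterior]

    world = ['green', 'red', 'red', 'green', 'green']
    prior = [1.0 / len(world)] * len(world)   # base probability: 1/N
    print(sense(prior, world, 'red'))
    # cells containing 'red' now carry a higher probability than the rest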

The requisite math, and its implementation in python, were covered in the lecture. To me it was gratifying that what I have been learning in cs101 and cs212 enables techniques required to solve AI problems. Furthermore I welcome this class because, despite five semesters of math for my aerospace engineering degree, I was not deeply exposed to probability theory in that curriculum.

As an aside, and in the spirit of “synthesis,” which is part of the system engineer job description, I was struck by the similarity between localization and the “orientation” part of the OODA Loop. I have discussed the OODA Loop in a different context, for my own decision-making purposes, in an earlier post. So for different reasons I am exposed to similar concepts once again. I should not be surprised, though. The OODA Loop represents a command and control mechanism that in fact includes a localization component, one which is carried out in real time by a pilot for effective decision-making in combat. As such, combat piloting and robotic driving are more similar than you might think at first glance.

I don’t know what’s next in the class, but if it continues to be as interesting as localization was, I’ll be a happy student.

Functional Programming Defined by Consensus

Tuesday, June 26th, 2012


Learning computer science is part of what I am up to currently. Through two classes at Udacity.com I’ve been introduced to basic programming in cs101, and then to functional programming in cs212. Both used python, which is known for its power, ease of learning, and suitability for application to a broad spectrum of problems.

The first course introduced basic concepts such as creating, assigning and storing data in memory, defining functions as blocks of code, program flow control structures, argument passing and data return, program execution and data output.

Building on that foundation, the second course introduced additional powerful capabilities. List comprehensions, regular expressions, generator expressions, search algorithms and recursion were covered, all at breakneck speed. Fundamental to this second class, though, was a concept that challenged the understanding and assumptions of students new to computer science: the power of the function within functional programming. The essential distinction is that, beyond merely defining a function, it can thereafter be treated as any other object that the language accommodates. This is such a powerful idea, and its implications so far reaching, that there is a pedagogical imperative by which students are at first intentionally shielded from this knowledge. Computer science professors slowly feed their students regurgitated worms until their little birds are ready to be pushed from the nest; until such time in the course that students are properly prepared to absorb the full impact, utility, and implications of functional programming. Lovely image, won’t you agree? I just threw that in to see if anyone was paying attention.

For me, the concept was rather elusive at first, and it still remains somewhat mysterious; so much so that it bears revisiting. So now, before my next round of online classes starts, I will as an exercise attempt to define functional programming by digesting two existing StackOverflow threads (first, second). My theory is that if I shine a light on a subject from many different directions, a better understanding will emerge. This will be my first, but probably not my last, attempt to reinforce my conceptual understanding of functional programming.

As gleaned from the referenced threads, the following is one composite definition of functional programming, or “FP” hereafter. FP is well suited to a wide variety of problems, but especially so when mathematics is involved. Because of the mathematical emphasis, FP is more flexible in terms of abstraction and composition. Here abstraction means the ability to simplify problems by using FP constructs to model the problem. Composition refers to the ability to extend the power of functions by combining, nesting and extending them as permitted by the FP language of choice.

A key distinction, referring to the one first mentioned in my introduction, is that FP promotes functions to first-class objects. Here is where, if the professor said that on the first day of class, I would likely have said, “hey, I want to go back to my regurgitated worms.” Sorry, I could not resist.
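To see what first-class treatment means in practice, here is a small demonstration of my own in python: functions stored in a list, passed as arguments, and returned from other functions, just like any other object.

    def square(x):
        return x * x

    def compose(f, g):
        # build and return a brand new function: f(g(x))
        return lambda x: f(g(x))

    funcs = [square, abs]            # functions stored in a list
    print([f(-3) for f in funcs])    # called later: prints [9, 3]

    inc = lambda x: x + 1
    square_then_inc = compose(inc, square)   # passed in, handed back
    print(square_then_inc(4))                # prints 17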

One feature of an FP approach is to limit the use of global variables, that is, variables visible to the full scope of the program. This facilitates parallel execution. Typical usage would include graphic modeling and ray tracing; the latter figures prominently in computer animation.
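A toy contrast of my own shows why: a function that mutates shared state is unsafe to run in parallel, while a pure function, which depends only on its arguments, can be evaluated anywhere, in any order.

    total = 0

    def impure_add(x):
        global total        # mutates shared state; parallel calls collide
        total += x
        return total

    def pure_add(acc, x):   # result depends only on its arguments
        return acc + x      # safe to evaluate in parallel, in any order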

FP also implements powerful data structure capabilities, a benefit of which is more efficient algorithms and code than non-FP approaches allow. Similarly, with list comprehensions such as those provided in python, FP promotes brevity by allowing very complex loop mechanisms to collapse into a single-line list comprehension.
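For example (a trivial case of my own), a nested loop with a filter collapses to one line:

    # the imperative version
    pairs_loop = []
    for x in range(3):
        for y in range(3):
            if x != y:
                pairs_loop.append((x, y))

    # the same result as a single list comprehension
    pairs_comp = [(x, y) for x in range(3) for y in range(3) if x != y]

    assert pairs_loop == pairs_comp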

Generator expressions in python also promote efficient memory utilization in that elements in a large set of data can be produced on demand, one at a time, rather than consuming a large array as in imperative programming.
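A quick illustration of my own: the choice of brackets decides whether all the values are materialized at once or produced one at a time as they are consumed.

    squares_list = [x * x for x in range(10 ** 6)]   # builds a million items now
    squares_gen  = (x * x for x in range(10 ** 6))   # builds almost nothing yet

    print(sum(squares_gen))   # elements are generated one at a time on demand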

Finally, FP encourages modularity as a result of a more decoupled approach than imperative programming; this enables reusability of code and improves testability as well.
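As a small example of the testability claim (again my own), a pure function can even carry its own test in its docstring:

    def word_count(text):
        """Count the words in a string.

        >>> word_count("to be or not to be")
        6
        """
        return len(text.split())

    if __name__ == "__main__":
        import doctest
        doctest.testmod()   # runs the docstring example as a test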

This concludes my experiment in FP concept reinforcement. I have benefited from the exercise, and I hope you were not overly traumatized by the worm reference. I promise to try not to use such a revolting literary device ever again.