Hyper Neural Net

AI Boards Using a New Type of Neural Network

  • A new type of neural network was entered into a UK government-related competition. Although we didn't expect anything, we were surprised to be invited to present, and this was then turned into a proposal. Although we didn't win any grants, the work was examined in great detail by experts, and their words are these:
  • "...the technical challenge of building a new type of synthetic neuron that is computationally efficient in converting input stimuli into output axon dendrites is considerable. They noted this could provide a step change in how to code neural networks so that they do not require back-propagation to enforce learning."
  • In short, this new neural network dispensed with back-propagation in one fell swoop, and with it gradient descent and many other computationally intensive processes.
  • This new neural network and the way it works is a step change in the field of artificial intelligence, and with it we intend to conquer all by creating developer boards, services, and ready-made neural nets with APIs. We are going to call it the Hypercube Neural Network, or Hyper Neural Net for short, because we are going to build all of the hardware in Hypercube format, which is extensible into all manner of electronics and computing projects.
  • Current AI has a 50-year head start on this technology, so it will not be easy to catch up with and overtake existing R&D in terms of deployment for a number of years.
  • A lot of the neural networks used in driverless vehicles could be replaced by our technology, which can perform a thousandfold better on the same hardware or, conversely, outperform existing solutions on mediocre hardware.
  • The proof is in the making of real hardware that wins competitions, so we are starting immediately on this new range of AI boards and neural network Hypercube boards, with a view to releasing them as soon as we can.
  • We will be sure to test them on everything from the AI Synthetic Hand Project to AI-infused drones and robots, and report on how much difference these new ideas make.
  • The first challenge we will undertake is to pair a simple USB camera with a simple Hypercube version of our own Allwinner R40 ARM SoC board, and see if the device can be programmed with the algorithm to drive a toy car or a drone around obstacles in real time. We will then change the obstacles around in real time and, with a second camera, try to teach the drones and cars what to do to survive.
  • The great advantage is that this neural network can be trained in real time. For a drone, this can mean pointing at an area of the image and telling the system not to go there. For a driverless car, it can mean a roadside camera warning an approaching car of a novel obstacle such as roadworks, and the car accepting the training in under a second.
  • Cars can also image potential threats to safety on the road, such as children playing with a ball, and pass this information on to neighboring vehicles so they slow down, describing what poses the threat, including the number and size of the players, their dress, how fast they were seen moving, and their trajectories. Other vehicles can learn all of this, program it into their neural nets in under a second, and pass it on to all vehicles in the area.
  • This kind of sub-second AI knowledge transfer was never meant to happen between machines. But it's now real, it's fast, and it's coming your way!

Hyper Neural Net Differences

  • We are holding back from giving too much away at this early stage until we build up our technologies, but the graphic above illustrates the difference between a conventional neural net and the Hyper Neural Net.
  • Conventional neural nets wire up the neural interconnect with intense use of computational power, in a process called back-propagation, to compute connection strengths that reduce the forward-propagation error of signals through the net.
  • The Hyper Neural Net synthesizes the neural interconnect during training by programming each neuron directly.
  • Compared to back-propagation, this method gives both instant learning and instant transfer of knowledge: seconds to learn something new, and sub-second times to transfer that knowledge to something greater (or lesser). A toy sketch of the direct-programming idea follows this list.
  • We also know from the diagram what was transferred and which neurons made the decisions (something impossible to get a fix on instantly in a conventional neural net).
  • Knowledge creation and knowledge transfer at such great speed was never meant to happen between machines.
  • The consequences are at this moment incalculable.
  • The learning method can either take a single image, like the cat on the left, and create neural nets between pairs of points, or it can take hyper-spectral images (such as red and blue) and create the neural nets that discriminate the features at identical areas of the image; the two images can also be pictures separated in time, so that the neural net is trained to recognize streaming data such as video and sound.
  • The inputs above can be hyper-spectral images. The more inputs, the better the classification, and they do not add much to the training time. The implication is that thermal, UV, microwave, and radar images can be added at the same time to help differentiate objects and make recognition harder to evade. This is of great importance in night driving, for example.
  • The learning method can trivially be extended to remember bounding boxes around the object of interest at training time. Every time even a small recognition is triggered, the bounding boxes for the object are also generated, so that further stages can home in on better identification and classification. As in YOLO, each bounding box comes with a probability so that weaker boxes can be culled.
  • So how fast is training and how quickly can it be used?
  • The answer depends on whether the training is set up to scan the image left to right, top to bottom, or to use a technique we call Broken Arrow (p.s. no arrows were broken or harmed in any way despite the name - we were just beside ourselves when the name was suggested over a popcorn movie).
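
We are not publishing the actual neuron-programming algorithm at this stage, but purely to illustrate what instant, direct programming of neurons (no back-propagation, no gradient descent) can look like, here is a minimal toy sketch in Python. Everything in it - PairNeuron, the pair-of-points voting, the tolerance values - is our own illustrative assumption, not the real algorithm:

    import random

    class PairNeuron:
        # Toy neuron 'programmed' directly at training time: it memorizes
        # the pixel values at a pair of image points plus a tolerance,
        # instead of having weights fitted by back-propagation.
        def __init__(self, p1, p2, v1, v2, label, tol=16):
            self.p1, self.p2 = p1, p2    # (x, y) points chosen at training
            self.v1, self.v2 = v1, v2    # pixel values seen at those points
            self.label = label           # what this neuron votes for
            self.tol = tol               # discrimination threshold

        def fires(self, image):
            (x1, y1), (x2, y2) = self.p1, self.p2
            return (abs(image[y1][x1] - self.v1) <= self.tol and
                    abs(image[y2][x2] - self.v2) <= self.tol)

    def train_one_shot(image, label, n_neurons=64):
        # One pass over a training image (a 2D list of grayscale values):
        # each neuron is written directly from a random pair of points.
        # No gradients, no error signal, no iteration over epochs.
        h, w = len(image), len(image[0])
        neurons = []
        for _ in range(n_neurons):
            x1, y1 = random.randrange(w), random.randrange(h)
            x2, y2 = random.randrange(w), random.randrange(h)
            neurons.append(PairNeuron((x1, y1), (x2, y2),
                                      image[y1][x1], image[y2][x2], label))
        return neurons

    def classify(image, neurons):
        # The net is just a bag of directly programmed neurons voting.
        votes = {}
        for n in neurons:
            if n.fires(image):
                votes[n.label] = votes.get(n.label, 0) + 1
        return max(votes, key=votes.get) if votes else None

In this toy version, a hyper-spectral input would simply mean more (point, value) pairs programmed from the extra channels, and raising the tolerance makes neurons fire more easily at the cost of more false positives.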

Broken Arrow Hyper Neural Net Training Method

  • Broken Arrow does not scan images left to right, top to bottom; instead it statistically samples the image and immediately programs up the neurons, directly wiring the neural net. The Broken Arrow method of programming a neuron does not know or care whether the image is scanned or randomly sampled - that is down to the implementation details of the algorithm, and the final result is identical. For speed, however, Broken Arrow works wonders.
  • Broken Arrow training statistically samples the data and programs the neurons on the fly, so in theory the partially completed Hyper Neural Net can be used after a reasonable number of data samples have been programmed into neurons. Full learning is not necessary, so the time taken to learn something new and pass on valuable information can potentially be shrunk to sub-second intervals (see the first sketch after this list).
  • Why this whole package is a big deal can be illustrated with a knowledge-transfer robot game.
  • The aim of the game is for one robot to teach another robot to fetch a ball from a collection of patterned balls.
  • The fetch robot would already be trained on a number of patterned balls, so its user can reliably tell it to fetch a particular ball and it will do so without any problems.
  • Another robot has a different, new pattern in mind and needs to inform the fetch bot what to fetch.
  • Like its human operator, the robot can press the train button on the fetch robot and show it the new ball; the fetch robot will add it to its training and fetch the new ball as requested. This may affect the existing neural net.
  • Another way it can train the fetch robot is to share the neural network training over a 'mind link' (e.g. an RS232 data link) and upload the relevant neurons to the fetch robot.
  • The software running on each robot does not need to be similar, and the neural implementation code does not need to be identical. Only the description of the neurons and interconnect needs to be the same (see the second sketch after this list).
  • The fetch robot then synthesizes the new neurons for itself from the description and adds them to its own neural collection; it then knows how to recognize and fetch the new ball as well as all the items it was previously trained on.
  • This transfer of information takes under a second to achieve.
  • The difference here is that the fetch robot's AI and its entire neural net have not been reprogrammed, because there is no way to access them - and even if we could, the data format and implementation details could be different. So the original robot and its personality remain what they always were. Instead, we can say it has been trained to do something new by growing neurons without disruption to its existing neural net. As usual, we can't be sure exactly what the system has learned, because the new neurons may trigger on false positives when other patterns are shown; if the discrimination thresholds are set high, it is possible that false positives will not trigger the new neurons.
  • That is a big deal, because the machines are talking to each other in AI and learning from each other in AI, without any coding work or analysis, all in the blink of an eye. This is something that is not supposed to happen for decades to come.
  • With this kind of software put behind a web interface, it should be possible to download AI fragments much like shopping at a grocery store and mix them together to generate the perfect system in a very short time. Our competitors cannot do that at this moment in time.
  • We just want to hurry now and get prototypes built and into the hands of developers to try :)
  • 2018-08-03: This board is now manufactured and has a large enough CPU for us to write embedded AI algorithms and test the theory that knowledge can be passed between different machines :)
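
To make the Broken Arrow idea above concrete, here is a toy sketch of how we read it: statistically sample the image, program a neuron directly from every sample, and let the caller stop early and use the partially built net. The dict-shaped neuron description and the score function are illustrative assumptions only:

    import random

    def broken_arrow_train(image, label, budget=500):
        # Statistically sample the image instead of raster-scanning it,
        # program a neuron directly from each sample, and yield the
        # partially built net so it is usable before training completes.
        h, w = len(image), len(image[0])
        net = []
        for _ in range(budget):
            x, y = random.randrange(w), random.randrange(h)
            net.append({"at": [x, y], "value": image[y][x], "label": label})
            yield net           # caller may stop early with a partial net

    def score(image, net, tol=16):
        # Count how many programmed neurons still match this image.
        return sum(abs(image[n["at"][1]][n["at"][0]] - n["value"]) <= tol
                   for n in net)

    img = [[0] * 8 for _ in range(8)]   # dummy 8x8 grayscale image
    for partial in broken_arrow_train(img, "ball"):
        if len(partial) >= 50:          # stop early: usable sub-second
            break
    print(score(img, partial))          # partial net already classifies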
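
And here is the corresponding toy sketch of the 'mind link' from the robot game: only the description of the relevant neurons travels (here as JSON, over RS232 or any other link), never the implementation code, and the receiver grows new neurons without touching what it already knows. The description format is again an assumption, carried over from the sketch above:

    import json

    def export_neurons(net, labels):
        # Ship only the *description* of the neurons that matter,
        # e.g. those trained on the new ball pattern.
        return json.dumps([n for n in net if n["label"] in labels])

    def import_neurons(own_net, payload):
        # The fetch robot synthesizes neurons from the description and
        # grows its own net; nothing already learned is reprogrammed.
        own_net.extend(json.loads(payload))
        return own_net

    # e.g. fetch_net = import_neurons(fetch_net, export_neurons(net, ["ball"]))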

AI and Emergent Behavior Such as Paranoia

  • One side effect of AI that transfers knowledge instantly could be emergent paranoia.
  • In a combat situation, AIs can relay their demise experiences to each other before they fully succumb, and this in turn teaches other robots and their AI-infused systems to survive. A good idea, you might argue, because another robot is not going to repeat the same mistakes. All this information can, however, lead to paranoid AI behavior. Any sound, any vibration, any unusual thermal signature in an area where several robots fell would force the AI to assume the worst - that it itself may not survive the next encounter - and so it could opt to charge in with its guns blazing, having learned from previous robots that this variation helped prolong survival. In the process, the AI creates emergent behavior that is fully compatible with the symptoms of paranoia. The questions racing through the paranoid AI system are the same as those of a paranoid human. Are there others looking at me? Is that flash of light in the darkness more than just a firefly? Was that noise behind me my own footsteps, or someone following me? The third robot was felled shortly after detecting a similar footstep signal - so maybe I should shoot first and ask questions later! :(
  • The only way to create safety is to create even more benevolent AI and fuse it with sensors that overcome the loss of precision in information gathering that leads to mistakes and dangerous emergent behavior.

Future AI

  • We show how a Yottabyte Hypercube Data Center is enough to store an entire brain after it has been digitized by slicing it, recording every neural connection, and simulating its function. We then show that, by simulating neurons, artificial intelligence neural networks can write software from descriptions of requirements. That is a very big takeaway for future AI systems, because it shows a route for AI to understand specifications and write software from them.
  • When we release our AI hardware, we will add features that accept abstract typed input and convert it into exact software, because that is one very big future AI can look forward to realizing - and we know it is possible.

The Future of AI Merged with Parametrics

  • AI will be connected to all manufacturing machines, and the manufacturing machines will receive their manufacturing data from AI systems.
  • A large part of it will be driven by parametric software.
  • If it were a construction company, then each model of building would be programmed into the AI system, which could vary the dimensions according to inputs such as the final dimensions, the aesthetic appearance of the building, how many families and office functions it can accommodate, and so on.
  • The dimensions of each component are then created by parametric software without human intervention (a hypothetical sketch follows this list).
  • The AI will learn from other systems what is required, minimizing the error over all the inputs, so that it designs the perfect building.
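
As a purely hypothetical illustration of that parametric step (the rules and numbers below are invented for this sketch, not taken from any real construction system):

    def building_components(floors, families_per_floor,
                            floor_height_m=3.0, flat_area_m2=85.0):
        # Given high-level inputs, derive component dimensions with no
        # human intervention (toy rules: square footprint, fixed columns).
        footprint_m2 = families_per_floor * flat_area_m2
        side_m = footprint_m2 ** 0.5
        return {"height_m": floors * floor_height_m,
                "side_m": round(side_m, 2),
                "columns": 4 * (floors + 1)}   # invented structural rule

    # building_components(10, 4) -> {'height_m': 30.0, 'side_m': 18.44, 'columns': 44}

An AI front end would then only need to search over these high-level inputs to minimize its error measure, with the parametric model supplying every component dimension.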

Plan

We think there is a clear path forward for this technology.

  1. Use the Hypercube Linux server board to take images and store them on SSD (this software is already built)
  2. Install the Hyper Neural Net software, written in C, on the Linux server board (needs to be built)
  3. Build training software in Python and PHP with a web-based user interface to allow users to train the Hyper Neural Net (the system is already half built)
  4. Use the Hyper Neural Net to look for a small, specific number of items in the images and trigger IO lines or send messages down serial RS232 at a low data rate (see the sketch after this list).
  5. Use the Arduino IDE to program an Arduino clone board to take actions such as opening doors and triggering further actions on other Arduino or bigger Linux boards. (We will provide example software - the user customizes it to his or her needs.)
  6. Increase the number of Hypercube Linux server boards to increase the number of images gathered and/or AI functions added in parallel by fitting more boards to the Hypercube Array. Train the boards to recognize various items such as a soldering iron, screwdriver, nuts and bolts, etc.
  7. Modularize the neural net training so that it can be shared with other boards without having to retrain each new board. The idea is, let's say, to train one board to watch the front door and let your pet dog (and only your dog) in and out, but not the cat. A second board in the garden also has a video camera and lets the cat and dog in and out (and only your cat and dog, while raising an alarm if it is anything else). The modularized training for recognizing the dog is shared with the second board without having to retrain its neural net, saving lots of time. We will make the neural net modules transferable over Ethernet, allowing them to be uploaded to servers and downloaded as needed. The servers can be local in the same house or external on the real Internet, so potentially everyone can share their neural net training modules with everyone else. If someone has trained a neural net to recognize chair legs, you can download it and add it to your collection so that if, say, you build a robot vacuum cleaner, it will have some idea what chair legs look like without having to be specifically trained for it :)
  8. The focus of this kind of module is to develop edge devices - devices that do the AI work without needing to connect to central servers.
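
As a sketch of step 4 feeding step 5: on the Linux side this might use the pyserial package; the port name, baud rate, and message format below are assumptions for illustration:

    import serial  # pyserial: pip install pyserial

    def report_detection(label, port="/dev/ttyUSB0", baud=9600):
        # When the Hyper Neural Net recognizes an item, send a short
        # low-data-rate message down serial RS232 for the Arduino clone
        # board (step 5) to act on, e.g. by opening a door.
        with serial.Serial(port, baud, timeout=1) as link:
            link.write(("DETECT " + label + "\n").encode("ascii"))

    # e.g. report_detection("soldering_iron")

The module sharing in step 7 could reuse the same neuron-description format as the 'mind link' sketch earlier, just carried over Ethernet instead of a serial link.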

Future Trends in AI

  • AI requires a lot of data from connected devices to understand our needs, which means we must move into the symmetric gigabit fiber Internet age to get it the data it needs.
  • AI and Big Data will be severely hampered by the 20MHz paging speed limit of RAM until something specific is done about it for each project. (More info about the 20MHz limit.) Today (2018), one solution is to increase the number of custom chips being made for AI, but the best approach is to focus on lifting the 20MHz speed limit for all projects.

Hypercube 3D Thinking Machine

  • Hypercubes are 3D wired machines, and it is relatively easy to build a 3D wired machine that simulates the actions of various areas of the brain lighting up with activity, interconnecting with other areas, and lighting those up in turn. Although billions of neurons are involved, we can today scan for bulk activity in real time and transfer it to the Hypercube 3D Thinking Machine. We can replay those bulk activities repeatedly and correlate them with what a person was thinking at the time the activity was recorded. If we do that with many samples from the population, what emerges is a set of 3D activation patterns that describe what an average person was thinking. This raw data becomes easy for one Hypercube 3D Thinking Machine to communicate to other Hypercube 3D Thinking Machines, so that collectively they can process our many thoughts and summarize our feelings.
  • It can provide valuable feedback to those in charge of society, helping them think through better ideas because they will get a more accurate picture of how well those ideas were received.



