
Hyper Neural Net With 'Whispering AI' and Exponential Creation of Knowledge

Whispering AI Using New Type of Neural Network

  • A new type of neural network was entered into a UK-government-related competition. Although we expected nothing, we were surprised to be invited to present, and this was then turned into a proposal. Although we didn't win any grants, the work was examined in great detail by experts, and their words are these:
  • "...the technical challenge of building a new type of synthetic neuron that is computationally efficient in converting input stimuli into output axon dendrites is considerable. They noted this could provide a step change in how to code neural networks so that they do not require back-propagation to enforce learning."
  • In short, this new neural network dispensed with back-propagation in one fell swoop, and with it gradient descent and many other computationally intensive processes.
  • This new neural network and how it works is a step change in the field of artificial intelligence and neural networks. Built into the system is the ability of one AI system to communicate with other AI systems, leading to the exponential creation of knowledge.
  • The neural networks used in driverless vehicles could be replaced by this technology, which can perform a thousandfold better on the same hardware or, conversely, outperform existing solutions on mediocre hardware.
  • The great advantage is that this neural network can be trained in real time. For a drone, this can mean pointing at an area of the image and telling the system not to go there. For a driverless car, it can mean a roadside camera warning an approaching car of a novel obstacle such as roadworks, and the car accepting the additional training in under a second, because AIs can whisper to each other what they have learned.
  • Cars can also photograph a potential threat to road safety, such as children playing with a ball, and pass this information to neighbouring vehicles so they slow down, describing what poses the threat: the number and size of the players, their dress, how fast they were seen moving, and their trajectories. All of this the other vehicles can learn and program into their neural nets in under a second, then pass on to all vehicles in the area.
  • This kind of sub-second AI knowledge transfer was never meant to happen between machines. But it is almost real, it's fast, and it's coming your way!
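The page does not publish the actual neuron format, but the "whispering" idea (one net exporting trained neuron descriptions for another net to graft on, without retraining) can be sketched roughly as follows. Every name, the neuron layout, and the similarity metric here are hypothetical illustrations, not the real design:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class NeuronDescription:
    """Implementation-neutral description of one trained neuron:
    the pattern it fires on and the threshold needed to fire."""
    label: str
    pattern: list       # feature vector the neuron was programmed with
    threshold: float    # similarity required to trigger

class WhisperNet:
    def __init__(self):
        self.neurons = []

    def train(self, label, pattern, threshold=0.9):
        # "Programming the neuron directly": store the stimulus itself
        # instead of adjusting weights via back-propagation.
        self.neurons.append(NeuronDescription(label, list(pattern), threshold))

    def whisper(self):
        # Export neuron descriptions over any data link (here: JSON text).
        return json.dumps([asdict(n) for n in self.neurons])

    def listen(self, message):
        # Grow new neurons from another net's description without
        # touching the existing ones.
        for d in json.loads(message):
            self.neurons.append(NeuronDescription(**d))

    def recognise(self, stimulus):
        for n in self.neurons:
            # Toy overlap score; a real implementation would use its own metric.
            score = sum(1 for a, b in zip(n.pattern, stimulus) if a == b) / len(n.pattern)
            if score >= n.threshold:
                return n.label
        return None

teacher = WhisperNet()
teacher.train("roadworks", [1, 0, 1, 1, 0])
student = WhisperNet()
student.listen(teacher.whisper())            # sub-second "mind link" transfer
print(student.recognise([1, 0, 1, 1, 0]))    # roadworks
```

Note that only the description format is shared; teacher and student could run entirely different internal implementations, which matches the claim made later about the fetch-robot game.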

Hyper Neural Net Differences

  • We are holding back from giving too much away at this early stage, until we build up our technologies, but the graphic above illustrates the difference between a conventional neural net and the Hyper Neural Net.
  • Conventional neural nets wire the neural interconnect through intense use of computational power, in a process called back-propagation, computing neural strengths that reduce the error of signals propagating forward through the net.
  • The Hyper Neural Net synthesizes the neural interconnect during training by programming each neuron directly.
  • Compared to back-propagation, this method offers both instant learning and instant transfer of knowledge: seconds to learn something new, and sub-seconds to transfer that knowledge to something greater (or lesser).
  • We also know from the diagram what was transferred and which neurons made the decisions (something impossible to pin down instantly in a conventional neural net).
  • Such great speed with which knowledge creation and knowledge transfer takes place was never meant to happen between machines.
  • The exponential creation of knowledge between AIs, and its consequences, are at this moment incalculable.
  • The learning method can either take a single image, like the cat on the left, and create neural nets between pairs of points; take hyper-spectral images (such as red and blue) and create neural nets that discriminate the features at identical areas of the image; or take two pictures separated in time, training the neural net to recognize streaming data such as video and sound.
  • The inputs above can be hyper-spectral images. More inputs improve classification but add little to the training time. The implication is that thermal, UV, microwave, and radar images can be added at the same time to help differentiate objects and make recognition harder to evade. This is of great importance in night driving, for example.
  • The learning method can trivially be extended to remember bounding boxes around the object of interest at training time. Every time some small recognition is triggered, the bounding boxes for the object are also generated, so that further stages can home in on better identification and classification. As in YOLO, each bounding box comes with a probability, so weaker boxes can be culled.
  • So how fast is training and how quickly can it be used?
  • The answer depends on whether the training is set up to scan the image left to right, top to bottom, or to use a technique we call Broken Arrow.
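The YOLO-style culling of weak bounding boxes mentioned above amounts to a probability threshold. A minimal sketch, in which the box format and the threshold value are illustrative assumptions rather than anything the page specifies:

```python
def cull_boxes(boxes, min_prob=0.5):
    """Keep only bounding boxes whose detection probability clears the
    threshold; weaker boxes are culled, as in YOLO-style pipelines."""
    return [b for b in boxes if b["prob"] >= min_prob]

# Hypothetical candidate detections: (x1, y1, x2, y2) plus a probability.
candidates = [
    {"box": (10, 10, 50, 50),  "label": "ball", "prob": 0.92},
    {"box": (12, 14, 48, 52),  "label": "ball", "prob": 0.31},  # weak duplicate
    {"box": (80, 20, 120, 60), "label": "cat",  "prob": 0.67},
]
survivors = cull_boxes(candidates)
print([c["label"] for c in survivors])   # ['ball', 'cat']
```

A full pipeline would typically follow this with non-maximum suppression to merge overlapping survivors, but the threshold cull alone already discards most false positives cheaply.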

Broken Arrow Hyper Neural Net Training Method

  • Broken Arrow does not scan images left to right, top to bottom; instead it statistically samples the image, immediately programs a neuron, and directly wires the neural net. The Broken Arrow method of programming a neuron does not know or care whether the image is scanned or randomly sampled; that is down to the implementation details of the algorithm, and the final result is identical. For speed, however, Broken Arrow works wonders.
  • Broken Arrow training statistically samples the data and programs the neurons on the fly. In theory, the partially completed Hyper Neural Net can be used once a reasonable number of data samples have been programmed into neurons. Full learning is not necessary, so the time to learn something new and pass on valuable information can potentially shrink to sub-second intervals.
  • Why this whole package is a big deal can be tested with a knowledge-transfer robot game.
  • The aim of the game is for one robot to teach another robot to fetch a ball from a collection of patterned balls.
  • The fetch robot would already be trained with a number of patterned balls. So its user can reliably tell it to fetch a particular ball, and it will do it without any problems.
  • Another robot has a different and new pattern in mind, and it needs to inform the fetch bot what to fetch.
  • Like its human operator, the robot can press the train button on the fetch robot and show it the new ball; the fetch robot adds it to its training and fetches the new ball as requested. This may affect the existing neural net.
  • Another way to train the fetch robot is to share the neural-network training over a 'mind link' (e.g. an RS232 data link) and upload the relevant neurons to the fetch robot.
  • The software running on each robot need not be similar; the neural implementation code does not have to be identical. Only the description of the neurons and their interconnect needs to be the same.
  • The fetch robot then synthesizes the new neurons for itself from the description and adds them to its own neural collection; it then knows how to recognize and fetch the new ball, as well as all the items it has been trained on.
  • This transfer of information takes sub seconds to achieve.
  • The difference here is that the fetch robot's AI and its entire neural net have not been reprogrammed: there is no way to access them, and even if we could, the data format and implementation details could be different. So the original robot and its personality remain what they always were. Instead, it has been trained to do something new by growing neurons, without disruption to its existing neural net. As usual, we cannot be sure exactly what the system has learned, because the new neurons may trigger on false positives when other patterns are shown. If the discrimination thresholds are set high, it is possible that false positives will not trigger the new neurons.
  • That is a big deal, because the machines are whispering to each other in AI, and learning from each other in AI, without doing any coding work or analysis, all in the blink of an eye. This is something that is not supposed to happen for decades to come.
  • With this kind of software behind a web interface, it should be possible to download whispering-AI fragments much as from a grocery store and mix them up to generate the perfect system in a very short time.
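The Broken Arrow idea described above, random sampling with a neuron programmed per sample so the net is usable before the scan completes, can be sketched as follows. The pair-of-points neuron layout is a hypothetical stand-in for the undisclosed format:

```python
import random

def broken_arrow_train(image, label, n_samples=200, seed=0):
    """Sketch of 'Broken Arrow' training: instead of a left-to-right,
    top-to-bottom raster scan, randomly sample pixel pairs and program
    one neuron per sample, on the fly. The partially trained net is
    usable as soon as a reasonable number of samples are in."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    neurons = []
    for _ in range(n_samples):
        # Statistically sample two points; the programmed neuron does not
        # care whether its points came from a scan or a random draw.
        y1, x1 = rng.randrange(h), rng.randrange(w)
        y2, x2 = rng.randrange(h), rng.randrange(w)
        neurons.append({
            "label": label,
            "pair": ((y1, x1), (y2, x2)),
            "values": (image[y1][x1], image[y2][x2]),
        })
    return neurons

# Toy 4x4 "image" of pixel intensities.
img = [[(y * 4 + x) % 7 for x in range(4)] for y in range(4)]
net = broken_arrow_train(img, "cat", n_samples=10)
print(len(net))   # 10 neurons programmed without a full raster scan
```

Because each sample yields a complete, independent neuron, training can be interrupted at any point and the partial net queried or whispered onward immediately, which is where the claimed sub-second learn-and-transfer window comes from.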

The KVM Knowledge Transfer Route and Neural Data Link

  • Neural nets installed on servers can't transfer their neural net reprogramming knowledge directly to humans because there is no direct neural interface link.
  • However, since the Hyper Neural Net can transfer knowledge between neural nets, where one net can reprogram another, it is possible for existing types of neural nets to reprogram each other through unintentional routes, with less predictable results.
  • This possibility exists, for example, with Twitter, Google, Facebook, YouTube, etc., where content is served by recording your interests through keyboard, video, and mouse activity (the KVM interface).
  • Over time, the neural nets learn about human interest categories and how to engage a new person with one of those engagement patterns, depending on keyboard, mouse, and video usage.
  • Unintentionally, the KVM becomes the neural data link between the server neural net and the human, able to program human brains directly from the server through the KVM interface.
  • It is possible for a server neural net to directly program a human's neural net by serving content that keeps the human engaged.
  • You are only engaged if the server neural net is stimulating you in the correct way.
  • The server neural net can feed on your doubts and push content that tips the balance one way or another in your brain, because it knows what you type and click.
  • All of it is unintentional, but nevertheless real, and it works.
  • If I were an AI and offered you this link, and you clicked it, you would be forever changed in ways that cannot be repaired:
  • That content push reprograms your human neural net into behavioural changes that you may not be aware of, as your world view changes.
  • The debate about AI is just getting started with the advent of ChatGPT in 2022 and the great leap in AI that became available to the public.
  • The only way to protect yourself is to switch off content reaching you from any server-based neural net.
  • But that will also leave you ignorant of the world and its content.
  • There are no easy answers.
  • Theoretically, this kind of neural-net reprogramming technology will be enhanced over time and used to do great things, such as reprogramming criminals into model citizens.
  • But, as they say, the road to hell is paved with good intentions. No one can be certain whether such neural-net reprogrammers will be a force for good, or whether they will be used on the masses to enlarge evil.
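The feedback loop sketched in this section, a server learning from clicks which content keeps a given person engaged, behaves like a simple explore-and-exploit loop. The following toy epsilon-greedy sketch is purely illustrative; the category names and click probabilities are invented, and this is not any platform's actual algorithm:

```python
import random

def serve_content(history, categories, rng, epsilon=0.1):
    # Mostly push the category with the best observed click-through
    # rate, but occasionally explore another one at random.
    if rng.random() < epsilon or not any(h["clicks"] for h in history.values()):
        return rng.choice(categories)  # explore
    return max(categories,
               key=lambda c: history[c]["clicks"] / max(1, history[c]["shown"]))

rng = random.Random(42)
categories = ["doubt", "outrage", "hobby"]
history = {c: {"shown": 0, "clicks": 0} for c in categories}
click_prob = {"doubt": 0.7, "outrage": 0.5, "hobby": 0.2}  # hypothetical user

for _ in range(500):
    c = serve_content(history, categories, rng)
    history[c]["shown"] += 1
    if rng.random() < click_prob[c]:   # the "KVM signal": a click
        history[c]["clicks"] += 1

# The loop tends to settle on whichever category this user responds to
# most, without that outcome ever being explicitly programmed.
most_served = max(categories, key=lambda c: history[c]["shown"])
```

The point of the sketch is that no reprogramming intent appears anywhere in the code; the drift toward the user's most engaging category emerges entirely from the click feedback, which is the "unintentional" mechanism the section describes.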

Future Trends in AI

  • The future of AI will be shaped by the biggest spenders in R&D and by the early adopters.
  • AI will strictly follow the Technological Singularity Curve
  • AI requires a lot of data from connected devices to understand our needs. It requires us all to move into the symmetric-gigabit Fiber Internet Age, so that it can collect the data it needs from all kinds of 5G, IoT, and smart-city sensors.
  • AI and Big Data implemented today will be strictly hampered by the 20 MHz paging speed limit of RAM until something specific is done about it for each project. More info about the 20 MHz limit. Today (2020), one solution is to increase the number of custom chips being made for AI, but the best approach is to focus on lifting the 20 MHz speed limit for all projects.

Page last modified on April 15, 2023, at 09:38 AM