Hypercube Cloud & Data Center


Hypercube Cloud and Hypercube Data Center scale to suit everything from IoT projects to mega-scale projects


Hypercube Cloud and Data Center

  • Hypercubes suit everything from small hobby projects to large supercomputing arrays for a 3D wired Data Center. Below is a working ARM SoC supercomputer Hypercube which pushes Data Center technology to new heights.
  • The power supplies and 16 Gbit backplane switch are built into the Hypercube.
  • The above Hypercube is a 44-core supercomputer array with SSDs running Linux, with Apache web hosting and FTP upload for IoT cameras. Densely stacked 3D arrays of these Hypercubes are what make a Hypercube Data Center (TM). The key difference between computers arranged in old-fashioned flat data center layouts and the Hypercube approach is that all the PCs in a Hypercube are wired in 3D, allowing compact data centers to be built.
  • The Hypercube Geometry page explains how the wiring is possible by partitioning 3D space into hardware spaces and non-intersecting conduit spaces.
  • Hypercube locks show how quick-release clips help assemble giant structures in minutes.
  • This is the fastest scalable server room technology in the world, merging IoT projects and a micro Hypercube Data Center into one seamless technology inside a research lab, all in one place. The servers run Linux and Python scripts.
  • It's not always possible to trust Big Data to the Cloud - use a micro Hypercube Data Center to keep data private whilst bringing the data processing in house to crunch the data locally.
  • For places ranging from hospitals and care homes to modern Industry 4.0 factories, the Hypercube is designed to integrate server technology and logging devices, such as the many IoT boards being developed, into one unbeatable 3D wired system that has never been possible before, all connected together with symmetric gigabit fiber Internet and 5G technologies.
  • Typically the machines are stacked into a 3D array around scaffolding structures, and then the scaffolding is removed, leaving behind a 3D packed Hypercube Data Center. The Data Center can be tightly packed into a 3D structure to occupy every part of a building, machine or structure, and then wired in 3D to make a network of machines like those found in Data Centers. The power supplies are on their own Hypercube boards, and they can come with distributed batteries for resilience against power outages. (See for example the Hypercube Lithium Prototyping Board.)
  • Ray Kurzweil is an incredible genius who noted that human technology is evolving along an exponential path - most notably information technology. If this pace continues, then by around 2035 technological evolution would become effectively vertical. But there is currently a problem with the predictions: we are supposed to be in the era of 3D computer chips today, as Moore's Law is tailing off and unable to keep up with demand.
  • Incredibly, Hypercubes are positioned at this precise moment in time to address the need for 3D wired computers. A 3D wired Hypercube Data Center gets over the main technical limitation of Moore's Law tailing off by building 3D wired computers.
  • The first picture shows what Hypercube PCBs look like when assembled into a project in minutes (see the Hypercube Power Distribution Board). A greater variety of boards can be wired together in 3D to make complete systems.
  • Each Hypercube requires 12 Gbit/s of network capacity (theoretical maximum). The combined data throughput from the SSD storage devices is easily capable of overwhelming 12 Gbit/s of networking capacity, so data is buffered in RAM and contended onto the network, which implies the CPUs are rarely if ever 100% busy.
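
A minimal back-of-envelope sketch of why the SSDs can saturate the network: the per-SSD throughput figure and the 12 boards per cube are assumptions for illustration only (the 12 Gbit/s network ceiling is the figure quoted above).

    # Rough sizing check in Python (illustrative figures, not measurements).
    NUM_BOARDS = 12            # assumed CPU boards per full Hypercube, each with one SSD
    SSD_GBIT_PER_S = 4.0       # assumed sustained read rate per SATA SSD
    NETWORK_GBIT_PER_S = 12.0  # theoretical maximum network capacity per Hypercube

    aggregate_ssd = NUM_BOARDS * SSD_GBIT_PER_S
    oversubscription = aggregate_ssd / NETWORK_GBIT_PER_S

    print(f"Aggregate SSD throughput : {aggregate_ssd:.0f} Gbit/s")
    print(f"Network capacity         : {NETWORK_GBIT_PER_S:.0f} Gbit/s")
    print(f"Oversubscription factor  : {oversubscription:.1f}x")
    # With a ~4x oversubscription the SSDs alone can fill the network, so reads sit
    # in RAM buffers and drain onto the network as capacity frees up, leaving the
    # CPUs mostly idle (rarely if ever 100% busy).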

Intel Hypercube Supercomputer

  • The design is taking shape - a low-energy (10 W) board has been chosen.
  • Ubuntu 18.04 now boots on the low-energy Intel board.
  • Next steps are designing a Hypercube PCB for dual redundant power, storage, and remote power-cycle reboot through an RS485 network.
  • This board digs into the conduit channel a little, but that is OK for this application.

Hypercube Micro Data Centers

  • With the increasing overheads of maintaining big machines, and of researching and developing new machines such as ships, trains, planes, mining machines, Hyperloop pods etc., there is a need for those machines to carry a Hypercube Micro Data Center on board.
  • The purpose of the Hypercube Micro Data Center is to process images, vibration sensor data, wind data, thermal data and so on in one place, in a consistent way that absorbs modern sensor arrays into a big machine so it can be managed and maintained better, reducing operational costs through planned and sensor-based maintenance.
  • Sensors such as vibration sensors, thermal sensors and noise sensors, as well as battery packs etc., require small embedded boards, while data processing requires big PC-scale processors and storage. All these boards are mixed into a compact Hypercube array, a seamless system arranged as a 3D wired computational structure that follows the contours of the small spaces available inside machines.
  • Micro Data Centers speed up AI fusion work many times over by placing data center and sensor systems together in the small spaces available inside machines, from Hyperloop pods to the self-driving vehicles now being developed. These micro data centers help make projects safer, and use AI to make faster and better decisions. Why is this happening now? Because processing and storage have become cheap, the AI ecosystem is larger, and Hypercubes enable 3D wired computational structures where previously the only option was cumbersome 19" rack solutions. First, data collection is handled by embedded CPUs; then training is performed on board using bigger processors; then inference is run on new data gathered by sensors in situ to control the machines (a minimal sketch of this loop is shown below). Data is often spread across many sources and can lag if it is in the cloud, so having the data on board speeds up local processing without projects being compromised by slow cloud round trips. AI data volumes are huge and growing, and data velocity issues grind projects to a halt, requiring a micro data center solution adaptable enough to power these workloads at scale.
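
A minimal sketch of the collect / train-on-board / infer-in-situ loop described above, using a toy vibration-sensor baseline in plain Python; the simulated readings, units and 3-sigma threshold are assumptions for illustration only.

    import random, statistics

    # 1) Collect: an embedded CPU would stream vibration samples from a real sensor;
    #    here the readings are simulated (arbitrary units).
    def read_vibration_sensor():
        return random.gauss(1.0, 0.1)

    baseline = [read_vibration_sensor() for _ in range(1000)]

    # 2) Train on board: fit a trivial statistical model (mean and spread of the baseline).
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline)

    # 3) Infer in situ: flag readings far outside the baseline so maintenance can be
    #    planned before the machine fails.
    def needs_maintenance(reading, threshold=3.0):
        return abs(reading - mean) > threshold * stdev

    print(needs_maintenance(read_vibration_sensor()))  # normally False
    print(needs_maintenance(2.0))                      # well outside the baseline -> True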

Cloud Latency Busting

  • Cloud Latency Busting (CLB) machines bring the power of AI right next to your physical presence, without the latency. Data can be precious and cannot be uploaded or downloaded to the cloud for security reasons, or not in a timely way. These kinds of restrictions kill projects in no time. The answer is to bring the cloud to your application and eliminate the latency. Sometimes the task is to figure out an optimum position from a current financial trend, for example; other times it's to find out where to drill next from the raw data. There is no time for data uploading and downloading: the opportunity is time-sensitive, and you need the CLB Hypercube there and then. Which we can do. A 1000-CPU Hypercube array, with each processor running at several GHz, would take up about 4 desk spaces and consume about 5 kW. Having your own CLB Hypercube Array defines who you are in an organization and how important your work is to your organization.
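
A quick sizing check behind the 1000-CPU figure above; the 5 W per CPU and 12 boards per cube are assumptions taken from the sections below, not measured values.

    CPUS = 1000
    WATTS_PER_CPU = 5        # assumed low-energy SoC power budget (see the AI section below)
    CPUS_PER_HYPERCUBE = 12  # CPU boards in a full cube

    total_power_kw = CPUS * WATTS_PER_CPU / 1000
    hypercubes_needed = -(-CPUS // CPUS_PER_HYPERCUBE)  # ceiling division

    print(f"Total power : {total_power_kw:.1f} kW")     # ~5 kW, matching the figure above
    print(f"Hypercubes  : {hypercubes_needed}")         # ~84 cubes packed into a few desk spaces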

Progress

  • This is the relay board that turns on each individual CPU, or arrays of CPUs, in Hypercube format. It allows 12 systems to be turned on and off as needed. Typically in a Data Center, unused resources are switched off to save power. Giant collections of computers are brought online as needed by an independent network of embedded computer systems that monitor the power, temperature, and software conditions of the CPUs. Everything from hacked CPUs to crashed CPUs is rebooted independently of the server CPUs (a sketch of such a watchdog follows this list).
  • A variant of this board with length-tuned tracks switches between SATA devices, allowing a master to take control of slave disks, reformat them, re-install the operating system and reboot the slave computers.
  • We need all this capability because Hypercube Data Center servers, once installed into a 3D matrix, are not expected to be recovered until their service life has expired. If an item goes faulty, it is simply switched off and left in the array until 18 months or so have elapsed, by which time new technology with double the capability has arrived and replacement becomes obligatory.
  • Currently these ARM CPUs have been tested and found to run for 6 months on average without needing a reboot or air-conditioned rooms. This form of resilience testing is intentional, as the data centers we make are not expected to come ready made with air-conditioned rooms that cost a lot in energy bills and super-sized AC units. We have servers that have been running for several years under these test conditions.
  • These secondary systems based on embedded CPUs are not usually hackable because they run on independent networks that should not be connected for remote access.
  • Everyday tasks such as backing up are run through the normal computing facilities available on each server and the Linux operating system running within it.
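
A minimal sketch of the watchdog described above: an embedded monitor on the independent network pings each server and power-cycles unresponsive ones through the relay board. The RS485 command format, port name and IP addresses are placeholders, not the real board's protocol; pyserial is assumed for the serial link.

    import subprocess, time
    import serial  # pyserial, driving an RS485 transceiver

    RS485_PORT = "/dev/ttyUSB0"                 # assumed USB-RS485 adapter
    SERVERS = {1: "10.0.0.11", 2: "10.0.0.12"}  # relay number -> server IP (placeholders)

    def host_alive(ip):
        """One ICMP ping; True if the server answered within 2 seconds."""
        return subprocess.call(["ping", "-c", "1", "-W", "2", ip],
                               stdout=subprocess.DEVNULL) == 0

    def power_cycle(bus, relay):
        """Drop the relay, wait, then restore power (hypothetical 1-byte command format)."""
        bus.write(bytes([relay << 4 | 0]))  # relay off
        time.sleep(5)
        bus.write(bytes([relay << 4 | 1]))  # relay on

    with serial.Serial(RS485_PORT, 9600, timeout=1) as bus:
        while True:
            for relay, ip in SERVERS.items():
                if not host_alive(ip):
                    power_cycle(bus, relay)  # crashed or hacked node: reboot it
            time.sleep(60)                   # poll once a minute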

Hypercube Embedded Supercomputer Arrays Are Faster Than Intel i9 CPU Server Arrays for AI Projects

  • With a number of caveats that apply :)
  • The problems with Intel (and ARM SoC) CPUs begin and end with DDR DRAM cycle times.
  • No matter how high the double data rate transfer speeds are, they rely on accessing memory sequentially from the currently open page, at hundreds of picoseconds per transfer.
  • But page sizes are small - around 4K. So when it's time to change page, the DDR DRAM must open another page, and this is where the problems hit like a ton of bricks.
  • It takes tens of nanoseconds. There are many variants, but a safe round number is 50 ns. And 50 ns corresponds to just 20 MHz! (See the back-of-envelope sketch after this list.) More info
  • So if a program has to continually jump around the address space, the effective speed of a $1000 Intel i9 CPU drops to 20 MHz!
  • This is true even with DDR5!
  • So what are you actually paying for when buying a $1000 CPU?
  • You are paying for block transfers within the current page at gigabytes-per-second burst speeds.
  • But if your software jumps all around the address space causing page changes - which it can do with graphics software, games software and database query operations - then your speed is 20 MHz!
  • So you mitigate graphics rendering with a graphics accelerator for pixel rendering, and improve games by offloading gaming algorithms onto the graphics controller.
  • But what about databases? Nope! There are no such off-the-shelf accelerators at the moment.
  • So when you run a query, you are unlikely to see much improvement over a year-2000 PC. But a lot of that is down to the mechanical head movement of a hard disk.
  • So you improve on that by adding an SSD, which will speed it up. But once again take care: many of these SSDs can only do around 1000 operations per second (a figure that is constantly improving).
  • So you finally have an i9 running like a 20 MHz PC that accesses disk 1000 times a second. What chance is there of trawling through terabytes of data with that?
  • Absolutely none. The way it's done is to partition the data and farm it out to a number of PCs, and/or reduce the amount of data that each query has to deal with.
  • With all that in mind, how do you build an AI supercomputer?
  • To build the AI supercomputer you need a Hypercube Embedded Supercomputer Array with all these boards, with more to come :)
  • Instead of using Intel CPUs, it is far better to use 100 MHz to 400 MHz embedded CPUs with internal flash and RAM. These can access their memory at 50 MHz even for random accesses, which is noticeably faster than the DDR random-access limit of 20 MHz!
  • But there is so little of that memory that it cannot run big software packages. Typically you get 1 MB of memory instead of gigabytes.
  • If the memories get any bigger with current technology, they are likely to get slower.
  • But AI is all about coding ideas into numbers, making decisions, and distributing those decision-making centers across an array of CPUs. Typically you can make around 100,000 decisions per second with each CPU, allowing each CPU to execute around 1000 instructions per decision on average (which includes accesses to 50 MHz RAM and register operations).
  • So with 12 embedded CPUs making decisions, you are making around 1 million decisions per second, even with software that is jumping around all over the place in the decision-making code.
  • The only thing that can catch that is an FPGA. But FPGAs do not run C-like code, so they are not generic problem solvers as complexity grows.
  • Typically, the problems the embedded supercomputer solves for real-world autonomous systems that carry their own AI are:
    • Taking inputs from sensors such as GPS, Lidar etc. and looking up what behavior to follow next
    • Converting sounds and vibrations via the Fourier transform and running recognition software over the result
    • Converting vision signals into recognition results that can be meaningfully communicated to other systems such as actuators
  • At all times we limit the need for FPGAs, bringing them in only where they are critical and time-saving.
  • A Hypercube board with an FPGA will be available to implement FPGA solutions for sub-systems.
  • Hypercube boards with low-energy ARM SoCs will also be available. They are good for much more data-intensive applications such as synthesizing speech.
    • The ARM SoCs also have the 20 MHz random-access limit due to their use of DDR, but their power consumption can be held below 5 W at 1 GHz operation, so we can use many more of them in parallel for data processing. Higher-speed operations such as controlling motors and fast decision making are implemented with embedded CPUs.
  • By combining Hypercube embedded controller boards, Hypercube FPGA boards and Hypercube ARM SoC boards into Hypercube arrays, the best of all worlds is combined into a self-contained system suitable for making complete AI machines that need to carry their AI on board, such as robots, autonomous cars, autonomous delivery systems and so on.
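
The back-of-envelope numbers behind the 20 MHz and million-decisions-per-second claims above, using the figures from this section (the 50 ns page-open cost is the text's round number, not a measurement of any specific DRAM part):

    # 1) DDR random access: if every access misses the open page, each one pays ~50 ns,
    #    so the memory behaves like a ~20 MHz part for page-hopping code.
    page_miss_ns = 50
    effective_random_access_hz = 1e9 / page_miss_ns
    print(f"DDR random-access rate : {effective_random_access_hz / 1e6:.0f} MHz")  # ~20 MHz

    # 2) Embedded CPUs with on-chip RAM: ~1000 instructions per decision gives
    #    ~100,000 decisions per second per CPU; a full cube has 12 CPU boards.
    decisions_per_cpu = 100_000
    cpus_per_hypercube = 12
    print(f"Decisions per Hypercube: {decisions_per_cpu * cpus_per_hypercube:,} per second")  # ~1.2 million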

Hypercube High Security Data Processing Environment

  • Hypercubes contain 12 CPU boards in a full cube, with each running Linux on an SSD.
  • Each Hypercube CPU board runs its own MySQL database (and web server) by default.
  • If you are writing high-security applications, tables are partitioned across different CPU boards to operate on data faster in parallel.
  • Partitioning data also enforces security boundaries for each table. For example, "SELECT * FROM Employees" would be run by one CPU to retrieve personal data such as name, address, age and so on, plus an Employee Data ID for that person; all other data about the employee is held on a different CPU board and disk, referenced only by the Employee Data ID. If a hacker managed to log in and steal all the other tables from all the other CPU boards, there would likely be nothing there to identify a specific employee. Access to employee data is implemented by a broker program that first logs who is asking for employee data, then checks the machine IP address and a password, which makes it a lot more difficult to extract personal information directly out of the CPU holding the personal data (a sketch of such a broker follows this list).
  • Partitioning the data is easy in a Hypercube Cloud because there are many more real CPUs available, with physically separate disks, preventing hackers from getting direct, easy access to data without being logged.
  • Broker programs check important details such as which machines and IP addresses are authorized to access specific data, monitor how frequently data is being withdrawn (to detect siphoning by insiders), and watch for access to intentionally fake employee records to spot hacker activity - all possible because Hypercubes have enormous numbers of low-energy CPUs at their disposal to process big data.
  • Each Linux disk can also be encrypted so that if the physical disk were stolen, it would be of no use to anyone. (Encryption slows down disk operations, so it is used only on the most valuable data such as personal records.)
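
A minimal sketch of the partitioning-plus-broker idea described above. The text describes MySQL on physically separate CPU boards; this sketch uses two sqlite3 in-memory databases to stand in for the two boards so it is self-contained, and the table names, columns, IP address and password are illustrative placeholders.

    import sqlite3, logging

    # Two in-memory databases stand in for two physically separate CPU boards.
    identity_db = sqlite3.connect(":memory:")  # personal data + Employee Data ID
    payload_db = sqlite3.connect(":memory:")   # everything else, keyed by Employee Data ID

    identity_db.execute("CREATE TABLE Employees (data_id TEXT, name TEXT, address TEXT, age INT)")
    identity_db.execute("INSERT INTO Employees VALUES ('E17-99', 'A. Example', '1 Example Rd', 42)")
    payload_db.execute("CREATE TABLE Records (data_id TEXT, salary INT, appraisal TEXT)")
    payload_db.execute("INSERT INTO Records VALUES ('E17-99', 30000, 'good')")

    AUTHORIZED = {("10.0.0.50", "hr-app-password")}  # placeholder (machine IP, password) pairs
    logging.basicConfig(level=logging.INFO)

    def broker_lookup(requester_ip, password, name):
        """Log the request, verify the requester, then join across the two boards."""
        logging.info("employee lookup for %r from %s", name, requester_ip)
        if (requester_ip, password) not in AUTHORIZED:
            raise PermissionError("requester is not authorized for personal data")
        person = identity_db.execute(
            "SELECT data_id, name, address, age FROM Employees WHERE name = ?", (name,)).fetchone()
        if person is None:
            return None
        record = payload_db.execute(
            "SELECT salary, appraisal FROM Records WHERE data_id = ?", (person[0],)).fetchone()
        return {"identity": person[1:], "record": record}

    print(broker_lookup("10.0.0.50", "hr-app-password", "A. Example"))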

2018-09-25

  • 1 Gbit symmetric fiber Internet ordered
  • Soon we will have our Hypercube Servers running on it, with Linux as the operating system and numerous applications and projects.
  • All the material that is about to be open sourced will be served from Hypercube Servers, with anticipated downloads of terabytes per day.
  • All the training material will be available for students to learn embedded Linux and other essential skills, such as KiCAD, Python, C, SQL and PHP, that relate to the work we do.
