T6 Cache


T6 Cache is a 6-transistor-per-bit cache for microcontrollers and microprocessors

  • We aspire to become a fabless chip production company after developing a range of boards, a new programming language, and infrastructure for 3D wired computers using Hypercube Geometry.
  • One of the products we hope to make is a computing chip that executes the commands of the new programming language directly.
  • Over the years we have also discovered a new way to make a cache for this CPU. The average transistor count of the new cache is slightly under six transistors per bit.
  • This cache was submitted to a UK government competition; although it did not win a prize, the assessors' feedback was as follows:
"Assessors recognised that this innovation would provide a means to implement chip-level electronic caching of data using a smaller number of transistors, allowing larger amounts of data to be cached effectively while consuming less power."
  • In short, our cache alters the economic balance of power in chip design.
  • Even simple microcontrollers can now employ a cache by trading RAM size for cache, because the transistor count per bit is similar for cache and static RAM.
  • The fastest commercial chips on the market are Intel CPUs with caches of about 1 MB and above. That cache comes at a premium because of the silicon area it consumes and the power drawn by the large number of transistors used per bit of cache.
  • Compare that with just under 6 transistors per bit of cache. Because static RAM also costs 6 transistors per cell, static RAM can be traded off for cache roughly one for one, beating any old Intel chip at a fraction of the power usage and silicon area whilst doubling or quadrupling the cache size at the same time.
  • Another way to exploit the new cache is to run the CPU at high speed and multi-task between several memory buses. DDR memory can send and receive data fast (at GHz speeds) while staying within the same page, but if the page changes constantly, throughput drops to around 20 MHz per word. So if a chip has 4 memory buses, a 1 GHz CPU has ample time to service all 4, because the internal cache runs at static RAM speeds (at least 100 MHz in a modern CPU). As long as the static RAM cache is at least 4x faster than 20 MHz, the DDR memory can never overwhelm the internal cache; a back-of-envelope check appears after this list. For database and server applications where page faults are frequent, this type of CPU would outperform the best commercially available big-cache CPUs.
  • The power of this new cache should not be underestimated. It is absolute power for those willing to engage and newly contest every known area of CPU engineering. It is possible, for example, to build a 10 GHz 8-bit CPU with 4K cache and 1M static RAM and multiplex it across a 32-bit or 64-bit bus, and the customer would not be able to tell the difference between a real 32- or 64-bit CPU and the high-speed multiplexing version (the arithmetic is sketched after this list). The performance limit is set by the DDR chips, which cannot page faster than about 20 MHz. The product is a 20-cent chip with a current drain of about 100 mA that can beat any mobile chip of today and extend battery life to 10x the current state of the art.
  • Nor should the power of this new cache to change the entire data center technology be underestimated. All of it is currently crippled by the 20 MHz DDR paging limit, and the rest of the story is just word salad dressed around this fundamental problem. We offer a way to take out the power-hungry data center CPUs and replace them with chips consuming power similar to mobile processors, while still processing Big Data faster than any existing technology.
  • The six-transistor cache also implements a cache aging mechanism and still clocks in at under 6 transistors per bit of cache on average.
  • Memory locations that have not been used for a while are the first to be offered up for fresh data, while recently used locations are left untouched.
  • If an existing cached location is re-used, it jumps to the back of the queue, so only the oldest addresses are offered up for fresh data (see the aging model sketched after this list).
  • Our business plan is to develop the cache privately or partner with a university (it is a great academic project), create the IP, make the chip, package it as IP, and sell it to CPU makers, mobile processor makers, and server CPU makers.
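
To make the bus-servicing arithmetic above concrete, here is a minimal back-of-envelope check written in C. The figures are the ones quoted in the list (a page-fault-limited DDR word rate of about 20 MHz, a conservative 100 MHz static RAM cache, 4 memory buses); the program and its names are purely illustrative, not part of any actual design.

    /* Sketch: can an internal static RAM cache keep N DDR buses busy
       when every access causes a page change?  All figures are the
       ones quoted in the list above. */
    #include <stdio.h>

    int main(void) {
        const double ddr_word_rate_mhz = 20.0;   /* page-fault-limited DDR word rate */
        const double cache_rate_mhz    = 100.0;  /* conservative static RAM cache speed */
        const int    num_buses         = 4;

        double demand_mhz = ddr_word_rate_mhz * num_buses;  /* worst-case total demand */
        printf("total DDR demand: %.0f MHz\n", demand_mhz);
        printf("cache supply    : %.0f MHz\n", cache_rate_mhz);
        printf("the cache %s service all %d buses\n",
               cache_rate_mhz >= demand_mhz ? "can" : "cannot", num_buses);
        return 0;
    }

With 4 buses the worst-case demand is 80 MHz, comfortably under the 100 MHz the cache can supply, which is the 4x margin claimed above.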
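
The multiplexed 8-bit CPU example can be checked the same way. The 10 GHz clock, the 64-bit target bus, and the roughly 20 MHz DDR paging limit are the figures quoted above; everything else is an illustrative assumption.

    /* Sketch: effective word rate of an 8-bit core emulating a wider bus. */
    #include <stdio.h>

    int main(void) {
        const double core_hz   = 10e9;  /* hypothetical 10 GHz 8-bit core */
        const int    core_bits = 8;
        const int    word_bits = 64;    /* bus width being emulated */
        const double ddr_hz    = 20e6;  /* page-fault-limited DDR word rate */

        /* 64 / 8 = 8 core cycles per emulated 64-bit word */
        double emulated_hz = core_hz / (word_bits / core_bits);
        printf("emulated 64-bit word rate: %.2f GHz\n", emulated_hz / 1e9);
        printf("headroom over DDR limit  : %.0fx\n", emulated_hz / ddr_hz);
        return 0;
    }

Even spending 8 cycles per 64-bit word, the core delivers an effective 1.25 GHz word rate, roughly 62x faster than page-fault-limited DDR can consume, so the multiplexing is invisible to the memory system.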
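
Finally, a small software model of the aging behaviour described in the last three bullets: re-used lines jump to the back of a queue, and the line at the front (the oldest) is the first offered up for fresh data. In the real cache this ordering is maintained in hardware within the six-transistor-per-bit budget; the C sketch below is only a functional illustration with invented names.

    /* Functional model of the cache aging queue (illustration only).
       queue[0] is the oldest line, queue[LINES - 1] the newest. */
    #include <stdio.h>

    #define LINES 4

    static unsigned queue[LINES];
    static int used = 0;

    static void touch(unsigned addr) {
        int i;
        for (i = 0; i < used; i++) {
            if (queue[i] == addr) {           /* already cached: re-used */
                for (; i < used - 1; i++)     /* close the gap */
                    queue[i] = queue[i + 1];
                queue[used - 1] = addr;       /* jump to the back of the queue */
                return;
            }
        }
        if (used < LINES) {                   /* a free line is available */
            queue[used++] = addr;
            return;
        }
        printf("evict %u for %u\n", queue[0], addr);  /* oldest is offered up */
        for (i = 0; i < LINES - 1; i++)
            queue[i] = queue[i + 1];
        queue[LINES - 1] = addr;
    }

    int main(void) {
        unsigned trace[] = {1, 2, 3, 4, 2, 5};
        for (int i = 0; i < 6; i++)
            touch(trace[i]);
        return 0;
    }

Running the trace prints "evict 1 for 5": address 2 survives because it was recently re-used, and address 1, the oldest, is the one offered up.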