We are committed to providing our clients and partners with universal, easy-to-use, efficient, scalable, flexible and low-power FPGA-based machine learning inference platforms. Our AIScale architecture, in combination with our DeepCompressor, serves clients in the fields of computer vision, robotics, speech recognition and surveillance systems, as well as data centers: neural network acceleration from edge devices to servers.

Kortiq's novel way of mapping calculations to hardware resources, combined with advanced compression methods that significantly reduce external memory transfer size and power, enables our clients in these industries to move quickly from idea to product with an efficient and economical solution.

Apples or Pears?

SOLVE THIS CLASSIFICATION CHALLENGE WITH KORTIQ AISCALE

1. The Challenge

Detecting and recognizing objects is a simple task for a human, but solving it efficiently with an automatic, high-end embedded vision system can be very challenging.

2. Capturing

Razor-sharp images delivered by high-end cameras are a solid basis for mastering the challenge. But traditional image processing and pattern recognition algorithms may not be up to the task; they quickly become very complicated.

3. Classification

What makes a pear different from an apple? Training a Convolutional Neural Network on many different pictures of apples and pears makes classifying them much easier and can increase detection accuracy. TensorFlow running on CPU clusters or GPUs helps speed up the training phase.
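To see why learned classification beats hand-tuned rules, here is a toy sketch (not Kortiq's pipeline): a linear classifier trained by gradient descent on two hand-made, entirely synthetic features per fruit. A CNN goes further by learning the features themselves from raw pixels.

```python
import math
import random

# Synthetic toy data: "elongation" (pears are longer) and "redness"
# (apples tend to be redder). All values here are made up for illustration.
random.seed(0)

def make_fruit(is_pear):
    elongation = random.gauss(1.4 if is_pear else 1.0, 0.1)
    redness = random.gauss(0.3 if is_pear else 0.7, 0.1)
    return (elongation, redness), (1 if is_pear else 0)

data = [make_fruit(i % 2 == 0) for i in range(200)]

# Logistic regression trained with plain batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(500):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y                      # gradient of the log-loss
        gw[0] += err * x1
        gw[1] += err * x2
        gb += err
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

accuracy = sum(predict(x1, x2) == y for (x1, x2), y in data) / len(data)
```

With well-separated features this simple model does fine; the hard part in practice, which CNNs automate, is obtaining such features from images in the first place.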

Apples or Pears?

A CONVOLUTIONAL NEURAL NETWORK CAN HELP MASTER THE CHALLENGE

Diagram: input image → convolution layers (producing feature maps) → pooling layers → fully-connected layers → classification result ("PEAR")
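The pipeline in the diagram can be sketched in plain Python. Shapes and weights below are toy values chosen only to show the data flow; a real CNN has many layers with learned parameters.

```python
def conv2d(image, kernel):
    """Valid 2D convolution of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def maxpool2x2(fmap):
    """Non-overlapping 2x2 max pooling."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

def fully_connected(features, weights, bias):
    """One output neuron of a fully-connected layer."""
    return sum(f * w for f, w in zip(features, weights)) + bias

image = [[(i + j) % 4 for j in range(6)] for i in range(6)]  # 6x6 toy input
kernel = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]                # edge-detector kernel
fmap = conv2d(image, kernel)     # 4x4 feature map
pooled = maxpool2x2(fmap)        # 2x2 after pooling
flat = [v for row in pooled for v in row]
score = fully_connected(flat, [0.5, -0.5, 0.5, -0.5], 0.1)   # class score
```

The shapes shrink exactly as the diagram suggests: 6x6 input, 4x4 feature map after the 3x3 convolution, 2x2 after pooling, then a single score from the fully-connected stage.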

Integrate KORTIQ AIScale

With the KORTIQ AIScale CNN Hardware Accelerator IP, we can help you recognize WHAT you see.

Integration of the AIScale CNN with a Zynq SoC

Integrate KORTIQ

The KORTIQ AIScale DeepCompressor massively shrinks your selected network.

Diagram: Trained Network → compression with the KORTIQ AIScale DeepCompressor → Compressed Network
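The techniques inside AIScale DeepCompressor are not documented here, so the following is only a generic sketch of one common compression idea, magnitude pruning: weights near zero are dropped, shrinking the data that must cross the external memory interface.

```python
import random

# Toy weight tensor standing in for a trained layer's weights.
random.seed(1)
weights = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Prune weights whose magnitude falls below a (tunable) threshold.
threshold = 0.5
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

kept = sum(1 for w in pruned if w != 0.0)
# Rough compression factor if only the non-zero weights (plus their
# indices, ignored here) need to be stored and transferred.
ratio = len(weights) / kept
```

Real compressors combine several such techniques (pruning, quantization, encoding) and retrain to recover accuracy; this sketch shows only the storage-reduction principle.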

Integrate KORTIQ

With KORTIQ AIScale, we help you recognize WHAT you see.

Diagram: Compressed Network → translated for the FPGA with the TensorFlow2AIScale translator → classification result ("PEAR")

Apples or Pears, or even Persons?

MEET THE CHALLENGE AND SOLVE IT WITH KORTIQ

A 20-30% improvement in FPS is possible with the next version, planned for April 2018.

Network parameter and operation counts:

KY3 CNN: 3,946,416 parameters; 428,603,392 operations per input image
AlexNet CNN: 60,963,848 parameters; 725,508,992 operations per input image
VGG-16 CNN: 138,353,320 parameters; 15,476,385,792 operations per input image
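The per-network totals above can be reproduced layer by layer with the standard parameter-count formulas. The sketch below shows the formulas and checks them against two well-known single-layer values (the listed totals are sums over all layers of each network).

```python
def conv_params(in_ch, out_ch, k):
    """Parameters of a k x k convolution layer: weights plus one bias per filter."""
    return out_ch * (in_ch * k * k + 1)

def dense_params(in_n, out_n):
    """Parameters of a fully-connected layer: weights plus one bias per neuron."""
    return out_n * (in_n + 1)

# Two widely known single-layer counts:
alexnet_conv1 = conv_params(3, 96, 11)  # 96 * (3*11*11 + 1) = 34,944
vgg16_conv1 = conv_params(3, 64, 3)     # 64 * (3*3*3 + 1)   = 1,792
```

Summing such per-layer counts over a full architecture yields totals like the ones tabulated above; the operation counts additionally depend on each layer's output resolution.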
CNN.INIT

FIRST: Initialize the reconfigurable structure

Use a dedicated interface function to initialize the network.

Your network can be any CNN, e.g. ResNet, AlexNet, Tiny YOLO, VGG-16 …

AIScale is configured from pre-trained network models using TensorFlow, the AIScale DeepCompressor and the TF2AIScale Translator.

No need to generate different hardware architectures per CNN

No need for SW programming (C, C++, OpenCL)

No need to learn how to use specific libraries

No need to learn which functions to use with what parameters
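The actual initialization API is not published in this material, so every name in the following sketch is hypothetical; it only illustrates the workflow the points above describe: load a compressed, translated model blob and initialize the accelerator with a single call, with no C/C++/OpenCL programming and no per-network hardware build.

```python
class AIScaleAccelerator:
    """Hypothetical host-side wrapper; the real interface may differ."""

    def __init__(self):
        self.configured = False
        self.model = None

    def init(self, model_blob):
        """Initialize the reconfigurable structure from a translated model."""
        self.model = model_blob
        self.configured = True

# The blob would come from the tool chain described above:
# TensorFlow -> AIScale DeepCompressor -> TF2AIScale Translator.
model_blob = {"name": "TinyYolo", "layers": ["conv", "pool", "fc"]}

acc = AIScaleAccelerator()
acc.init(model_blob)
```

The point of the sketch: the same hardware is re-targeted to a different CNN purely by loading a different model blob.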

CNN.RUN

SECOND: Run the network

Once configured and initialized, the AIScale accelerator executes

3D CONVOLUTION

DEPTHWISE CONVOLUTION

POOLING

ADDING

FULLY-CONNECTED

layers, based on the chosen network structure. Activation functions are executed as a post-processing step of each layer.
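The run phase can be pictured as a dispatch loop over the configured layer list, with the activation applied as a post-processing step after each layer, as described above. The layer math below is deliberately stubbed out and all names are illustrative only.

```python
def relu(vec):
    """Activation applied as a post-processing step after each layer."""
    return [max(0.0, x) for x in vec]

def run_layer(kind, vec):
    # Stand-ins for 3D convolution, depthwise convolution, pooling,
    # adding and fully-connected processing.
    if kind == "pool":
        return vec[::2]                  # crude downsampling stub
    return [x * 0.5 for x in vec]        # generic compute stub

network = ["conv3d", "pool", "fc"]       # the chosen network structure
activations = [1.0, -2.0, 3.0, -4.0]

for kind in network:
    # Layer compute first, then the activation post-processing step.
    activations = relu(run_layer(kind, activations))
```

One loop, one reconfigurable engine: the layer sequence, not the hardware, determines what gets computed at each step.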

VIDEO – Kortiq Small and Efficient CNN Accelerator: Powered by Xilinx

Kortiq provides an easy-to-use, scalable, small-form-factor CNN accelerator. The device supports all types of CNNs and dynamically accelerates the different layer types found in the network. The Xilinx Zynq family of SoCs and MPSoCs helps Kortiq devices achieve the targeted performance levels and flexibility while remaining cost-effective.

AIScale CC (MAC)

The Re-configurable Compute Core (CC) is the heart of our AIScale accelerator and provides exceptional flexibility and scalability. Its small footprint is based on a coarse-grained, truly re-configurable computing principle and architecture.

The AIScale CC supports and processes Convolutional, Pooling, Adding and Fully-Connected layers. Depending on your requirements for size, frames per second or accuracy, the accelerator can be parameterized from a few CCs to several hundred CCs.

Take advantage of a hardwired, optimized network with the ability to switch between different CNN solutions based on customer needs, using pre-trained network parameters. It can be structured for low latency and custom memory allocation.

Colleague classification at 27 fps with the AIScale Hardware Accelerator IP, using 32 Compute Cores at 120 MHz running our KortiqY3 network.
This can, for example, be implemented in a cost-optimized Zynq device.
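A back-of-envelope check of the quoted figure: the KY3 network needs 428,603,392 operations per image, so 27 fps on 32 compute cores at 120 MHz implies roughly 3 operations per core per cycle. How many MACs a single CC actually issues per cycle is not stated in this material, so this is an implied figure, not a datasheet value.

```python
# All inputs are taken from the figures quoted above.
ops_per_image = 428_603_392     # KY3 CNN operations per input image
fps = 27                        # demonstrated frame rate
cores = 32                      # compute cores used
clock_hz = 120_000_000          # clock frequency

ops_per_second = ops_per_image * fps
ops_per_core_per_cycle = ops_per_second / (cores * clock_hz)  # ~3.0
```

The same arithmetic lets you size a configuration: pick a target fps and network, and solve for the number of CCs instead.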

AIScale DeepCompressor

TensorFlow2AIScale Translator

AIScale CNN Hardware Accelerator IP

Download Data Sheet

Please register here to download your copy of the AIScaleCDP2 IP Core datasheet (preliminary).

KORTIQ – a FROBAS brand

FROBAS GmbH
Gebrüder-Eicher-Ring 45
85659 Forstern, Germany

Phone: +49 8124 91890 03
Fax: +49 8124 91890 55
office(at)kortiq.com
www.kortiq.com

Commercial Register B München: HRB 149294
VAT-IdNr.: DE813816081

Get in touch
