Advancing R&D with Skymind

Skymind specializes in deep learning systems research: accelerating deep learning workloads by optimizing them for different hardware, and streamlining how deep learning models are packaged into applications. This initiative allows us to push the industry standard for how deep learning models are deployed, and enables us to deploy them in more places. Our end goal is to make deep learning accessible to everyone, at reduced cost and time to market.

Skymind achieves this through its portfolio of products with a focus on:

  • Model optimization.
  • Experiment tracking and deployment, for smoother team collaboration and greater transparency into deployed AI models.
  • AI workflow builders that let non-technical staff leverage AI within their working environment.
  • Data labeling, to ensure that new models are trained on accurately labeled data.


Our Projects

All of our research is developed, funded and conducted internally, and is geared toward creating an accessible, transparent and informative environment in the AI/ML space, with projects that are innovative, solve real problems and have potential commercial applications.


Technology Readiness Levels for Machine Learning Systems


The development and deployment of machine learning (ML) systems can be executed easily with modern tools, but if left unchecked, can lead to technical debt, scope creep and misaligned objectives, model misuse and failures, and expensive consequences.

We aim to bring systems engineering to AI and ML by defining and putting into action a lean Machine Learning Technology Readiness Levels (MLTRL) framework. We draw on decades of AI and ML development, from research through production, across domains and applications: for example, computer vision in medical diagnostics and consumer apps, automation in self-driving vehicles and factory robotics, tools for scientific discovery and causal inference, and streaming time series in predictive maintenance and finance.

In addition to prioritizing AI ethics and fairness, standardizing MLTRL across the AI industry should help teams and organizations develop principled, safe, and trusted technologies.
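
The framework's defining mechanism is graduated review: a system advances one readiness level at a time, and only after a review records supporting evidence. The sketch below is a hypothetical illustration of that gating idea; the class, fields, and checks are our own, not MLTRL's official definitions.

```python
from dataclasses import dataclass, field

@dataclass
class MLSystem:
    """Illustrative model of MLTRL-style readiness gating (hypothetical)."""
    name: str
    level: int = 0                          # readiness level, starting at research
    evidence: dict = field(default_factory=dict)

    def graduate(self, review_artifacts: dict) -> int:
        # A system may advance only one level per review, and the review
        # must record evidence (reports, test results, sign-offs).
        if not review_artifacts:
            raise ValueError("graduation requires review evidence")
        self.evidence[self.level + 1] = review_artifacts
        self.level += 1
        return self.level

system = MLSystem("defect-detector")
system.graduate({"report": "offline evaluation on held-out data"})
system.graduate({"report": "shadow deployment review"})
assert system.level == 2
```

The point of encoding the gate in software rather than in a document is that a model cannot silently skip from prototype to production without leaving an evidence trail behind.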


Advocating for Tiny Machine Learning



Tiny machine learning (tinyML) is the intersection of machine learning and embedded internet of things (IoT) devices. The field is an emerging engineering discipline with the potential to revolutionize many industries; its main beneficiaries are edge computing and energy-efficient computing.

Our aim is to stimulate interest within the AI community in more energy-efficient computing: commercial applications, new architectures, significant milestones in algorithms, networks, and models, and initial low-power applications in vision and audio.
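
Much of tinyML's energy efficiency comes from shrinking models until they fit in kilobytes of memory, most commonly by quantizing 32-bit float weights down to 8-bit integers. The following is a minimal NumPy sketch of uniform symmetric quantization, not the API of any particular tinyML toolchain:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    # Map float32 values onto the int8 range [-127, 127] using one
    # shared scale factor (uniform symmetric quantization).
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.default_rng(0).standard_normal(256).astype(np.float32)
q, scale = quantize_int8(weights)

assert q.nbytes == weights.nbytes // 4                        # 4x smaller in memory
assert np.abs(dequantize(q, scale) - weights).max() < scale   # bounded rounding error
```

In practice, embedded frameworks quantize per layer or per channel and quantize activations as well, but the storage-versus-accuracy trade-off illustrated here is the same.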


Automated End-to-End Optimizing Compiler for Deep Learning


There is high demand to incorporate machine learning into hardware devices across various industries. However, current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads onto new platforms requires significant time, effort and resources.

TVM is a compiler that exposes graph-level and operator-level optimizations to provide performance portability for deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates the optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that is competitive with hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs.

We also demonstrate TVM’s ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.
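
High-level operator fusion, one of the optimizations named above, merges adjacent operators so intermediate results never round-trip through memory. The NumPy sketch below illustrates the idea on a matmul + bias + ReLU chain; it is a conceptual illustration, not TVM's actual API:

```python
import numpy as np

def unfused(x, w, b):
    # Three separate operators, each materializing an intermediate buffer.
    y = x @ w                # matmul
    y = y + b                # bias add
    return np.maximum(y, 0)  # ReLU

def fused(x, w, b):
    # A fused kernel computes the same result in one pass over the data,
    # eliminating the intermediate buffers and their memory traffic.
    return np.maximum(x @ w + b, 0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 3))
b = rng.standard_normal(3)
assert np.allclose(unfused(x, w, b), fused(x, w, b))
```

TVM performs this kind of rewriting automatically at the graph level, then generates a single low-level kernel for the fused region tuned to the target hardware.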


Why Skymind?


Leaders in R&D

Professionals with extensive experience with the most advanced AI technologies and resources.


Multi-pronged exploration and analysis

Continuously working to find the optimal technical solution for each client.


State-of-the-Art Technologies

With the help of proven technologies, we seek new and innovative approaches in our solutions and services to achieve the best possible results.


Our Collaborators


Publications


Providing Core Technology to Tech Companies


DeepLearning4J: Deep Learning with Java, Spark and Power

Deep learning on Apache Spark and Apache Hadoop with Deeplearning4j

Cisco Brings AI/ML Workloads to Hyperconverged Infrastructure


Developing A.I. Supercomputer


2016 NVIDIA

World’s 1st A.I. Supercomputer

2018 HUAWEI

World’s 2nd A.I. Supercomputer

2019 SKYMIND & CHINESE ACADEMY OF SCIENCES

Proprietary A.I. Supercomputer


Providing A.I. Tech for COVID-19 Research


A.I. tech for COVID-19 research at Tunku Azizah Hospital

The groundbreaking way to search lungs for signs of COVID-19

AI tool which analysed COVID-19 in Wuhan available to NHS


Powering A.I. Processors

