cKnowledge.org's Projects
Development fork of the MLCommons CK2 framework (aka CM)
Collective Knowledge workflows for ArmNN
CK repository with components and automation actions to enable portable workflows across diverse platforms including Linux, Windows, macOS and Android. It includes software detection plugins and meta packages (code, data sets, models, scripts, etc.) that allow multiple versions to coexist in a user or system environment.
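As a rough illustration, CK exposes these detection plugins and packages through a single dict-in/dict-out access API. The sketch below assumes the CK v1 Python kernel (`ck.kernel.access`); the `compiler.gcc` plugin name is an assumption and may differ between CK repositories.

```python
# Minimal sketch of the CK (v1) Python access API: one entry point, dict in / dict out.
# The 'compiler.gcc' software-detection plugin name is illustrative and may differ
# depending on which CK repositories are pulled.
import ck.kernel as ck

r = ck.access({'action': 'detect',
               'module_uoa': 'soft',
               'data_uoa': 'compiler.gcc'})

if r['return'] > 0:
    # CK convention: non-zero 'return' code with an 'error' message
    print('CK error:', r['error'])
else:
    # On success, r describes the software environment(s) registered by the plugin
    print(r)
```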
Automated workflows for MLPerf, the industry-leading benchmark for evaluating the performance of ML software and hardware
CK-NNTest: collaboratively validating, benchmarking and optimizing neural net operators across platforms, frameworks and datasets
Collective Knowledge workflows for OpenVINO Toolkit (Deep Learning Deployment Toolkit)
Integration of PyTorch with the Collective Knowledge workflow framework to provide a unified CK JSON API for AI (customized builds across diverse libraries and hardware, a unified AI API, collaborative experiments, performance optimization and model/data set tuning).
Qualcomm Cloud AI (QAIC) implementation of MLPerf Inference benchmarks
Collective Knowledge components for TensorFlow (code, data sets, models, packages, workflows)
Collective Knowledge repository for NVIDIA's TensorRT
Clean copy of the collection of automation recipes (MLCommons CM scripts) without the whole CM repository
A collection of reusable, cross-platform automation recipes (CM scripts) with a human-friendly interface and minimal dependencies, designed to make it easier to build, run, benchmark and optimize AI, ML and other applications and systems across diverse and continuously changing models, data sets, software and hardware (cloud/edge)
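As a hedged sketch of what invoking such a recipe can look like, the example below uses the `cmind` Python package to run a script selected by tags; the `detect,os` tags are an assumption based on common CM recipes and may not be available in every catalog.

```python
# Minimal sketch of running a CM automation recipe from Python, assuming the
# 'cmind' package (pip install cmind) and a recipe selected by the tags 'detect,os';
# the tags are illustrative and depend on which recipes are installed.
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'detect,os',
                  'quiet': True})

if r['return'] > 0:
    print('CM error:', r['error'])
else:
    print(r)  # result dictionary produced by the recipe
```

The same recipe is typically also reachable from the shell through the `cm run script --tags=...` command-line front end.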
CM interface and automation recipes for research projects
Croissant is a high-level format for machine learning datasets that combines metadata, resource file descriptions, data structure, and default ML semantics into a single file; it works with existing datasets to make them easier to find, use, and support with tools.
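To make the single-file idea concrete, here is a hedged sketch of consuming a Croissant description with the `mlcroissant` Python package; the JSON-LD URL and the `default` record-set name are placeholders, not references to a real dataset.

```python
# Minimal sketch of loading a Croissant dataset description, assuming the
# 'mlcroissant' Python package; the URL and the 'default' record set name
# are placeholders.
import mlcroissant as mlc

ds = mlc.Dataset(jsonld="https://example.org/my-dataset/croissant.json")

# Iterate over records exposed by one of the dataset's record sets
for record in ds.records(record_set="default"):
    print(record)
```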
Collective Knowledge platform docs
Reference implementations of MLPerf™ inference benchmarks
This repository contains the results and code for the MLPerf™ Inference v3.0 benchmark.
Clean copy of MLPerf inference loadgen (20240402) without the whole inference repo mlcommons/inference
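For orientation, the loadgen is typically driven through its Python bindings roughly as in the hedged sketch below; the scenario, sample counts and dummy responses are placeholders rather than a valid benchmark configuration.

```python
# Hedged sketch of driving MLPerf LoadGen from Python via the 'mlperf_loadgen'
# bindings; the model call is omitted, and the dummy responses, scenario and
# sample counts are placeholders.
import mlperf_loadgen as lg

def issue_queries(query_samples):
    # Normally: run inference on each sample (query_samples[i].index) and
    # report pointers to the real output buffers instead of (0, 0).
    responses = [lg.QuerySampleResponse(qs.id, 0, 0) for qs in query_samples]
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass

def load_samples(sample_indices):
    pass  # normally: load these dataset samples into memory

def unload_samples(sample_indices):
    pass  # normally: release them

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 128, load_samples, unload_samples)

lg.StartTest(sut, qsl, settings)

lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```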
Clean copy of MLPerf inference tools (20240402) without the whole inference repo mlcommons/inference
Development repository for power measurement for the MLPerf™ benchmarks
Artifact Evaluation Reproduction for "Software Prefetching for Indirect Memory Accesses", CGO 2017, using CK.
[NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
Reference implementations of MLPerf™ training benchmarks