WIP #344

28 changes: 28 additions & 0 deletions data/C4DT/projects.yaml
@@ -124,3 +124,31 @@ projects:
title: Report 2024 matrix.epfl.ch
date_added: 2025-03-04
date_updated: 2025-03-04

showcase_v2:
name: Showcase-NG
categories:
- Other
applications:
- Info
type: Application
description: >
The next generation of our showcase updates the UI to present our latest projects in a modern
layout fit for 2025.
layman_desc: >
The C4DT showcase is the list of all digital-trust related software projects from our affiliated labs.
It is the first contact point for finding new projects, mainly used internally to communicate with our partners.
For every project you find a short description, and links to the papers and software.
Some of the projects have been evaluated by the C4DT Factory, further developed, or presented in hands-on workshops.
tags:
- Database
incubator:
type: incubated_market
work: 2025/Q2 - active usage
url: https://showcase.c4dt.org
# information:
# - type: Article
# url: https://c4dt.epfl.ch/article/report-2024-matrix-epfl-ch/
# title: Report 2024 matrix.epfl.ch
date_added: 2025-03-04
date_updated: 2025-03-04
60 changes: 60 additions & 0 deletions data/DCL/projects.yaml
@@ -644,3 +644,63 @@ projects:
url: https://dl.acm.org/doi/proceedings/10.1145/3575693
date_added: 2023-03-13
date_updated: 2024-03-22

inference4all:
name: Inference 4 all
categories:
- Privacy
- Blockchain
applications:
- Info
type: Application
description: Distributed ML inference across office computers for privacy
tech_desc: >
The system dynamically distributes large ML model inference workloads across
heterogeneous local computing resources with fault tolerance capabilities.
It employs an intermediate representation dialect that enables seamless
integration of new models without requiring manual hardcoding for each
application.
Unlike static distribution systems, Inference4all supports dynamic client
connections/disconnections and can efficiently partition large models
(up to 70B parameters) across limited hardware resources like standard
office computers.
layman_desc: >
Inference4all enables companies to run complex AI models directly on their
existing office computers instead of sending data to external datacenters.
This approach preserves data privacy by keeping sensitive information within
the organization while still allowing access to powerful AI capabilities.
By distributing computational tasks across regular machines like laptops and
desktops, organizations can leverage AI without expensive hardware investments
or privacy concerns.
language: C
tags:
- Decentralized
- "Machine Learning"
- "Byzantine Resilience"
information:
- type: Paper
title: "The Vital Role of Gradient Clipping in Byzantine-Resilient Distributed Learning"
url: https://arxiv.org/abs/2405.14432
- type: Paper
title: "Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates"
url: https://arxiv.org/abs/2402.12780
notes:
- label: Published at
text: ICML'24
url: https://dl.acm.org/doi/10.5555/3692070.3692116
- type: Paper
title: "Chop Chop: Byzantine Atomic Broadcast to the Network Limit"
url: https://infoscience.epfl.ch/entities/publication/d83dd7af-b83d-4f99-b50a-0b6d11e786be
notes:
- label: Published at
text: OSDI 2024
url: https://www.usenix.org/conference/osdi24/presentation/camaioni
- type: Paper
title: "Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity"
url: https://infoscience.epfl.ch/entities/publication/83afd663-8967-40d7-a0d9-5418371431f0
notes:
- label: Published at
text: NeurIPS 2023
url: https://papers.nips.cc/paper_files/paper/2023/hash/8f182e220092f7f1fc44f3313023f5a0-Abstract-Conference.html
date_added: 2025-03-21
date_updated: 2025-03-21
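The `tech_desc` above says the system partitions large models (up to 70B parameters) across heterogeneous office machines. The entry does not document the actual partitioning strategy; as a purely illustrative sketch (all names hypothetical), contiguous layer ranges could be assigned proportionally to each node's available memory:

```python
def partition_layers(n_layers, node_mem_gb):
    """Split n_layers into contiguous ranges, sized proportionally
    to each node's available memory (hypothetical heuristic)."""
    total = sum(node_mem_gb)
    ranges, start = [], 0
    for i, mem in enumerate(node_mem_gb):
        if i == len(node_mem_gb) - 1:
            # last node absorbs any rounding remainder
            count = n_layers - start
        else:
            count = round(n_layers * mem / total)
        ranges.append((start, start + count))
        start += count
    return ranges

# e.g. an 80-layer model spread over four office machines
print(partition_layers(80, [16, 16, 32, 32]))
# → [(0, 13), (13, 26), (26, 53), (53, 80)]
```

A real deployment would also have to handle the dynamic client connections/disconnections mentioned in the description, i.e. re-run the assignment when the node set changes.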
Binary file added resources/products/images/inference4all/logo.png
22 changes: 22 additions & 0 deletions views/products/demo/inference4all.tpl
@@ -0,0 +1,22 @@
<img src="../../../resources/products/images/inference4all/logo.png" width="30%"
style="float: left;" class="dark_invert"/>

<p>
Today, Large Language Models (LLMs) and other big Machine Learning (ML) models take center stage. These models can now be trained for specific, customized solutions. But running them, i.e. doing inference on new data, still requires access to a big datacenter.
</p><p>
What if a company or an organization doesn't have access to a datacenter, or the input data is too confidential to send out? We propose running inference across the existing computers in the office, like MacBooks and PCs. This eliminates the need for expensive hardware or cloud services, and keeps data secure.
</p>

<h3>Our Solution</h3>

<p>
We fully automate and optimize the distributed deployment of ML models for training and inference, dynamically leveraging available local machines (MacBooks, PCs, internal servers). Our high-performance, secure solution is ideal for companies seeking local ML usage with sovereignty and scalability.
</p><p>
The Unique Selling Points of our solution are:
</p>
<ol>
<li>Simplicity – Clients can focus on their business applications while our solution transparently handles distributed deployment.
</li><li>Efficiency – Clients can utilize existing machines, maximizing available computing power—even across heterogeneous hardware.
</li><li>Scalability – Large models can be run locally; for example, 4 MacBooks are enough to run a 70B-parameter model.
</li><li>Privacy – Our solution enables organizations to leverage AI’s power locally without relying on untrusted providers.
</li>
</ol>