diff --git a/data/C4DT/projects.yaml b/data/C4DT/projects.yaml
index 9f66ef2..6022c2b 100644
--- a/data/C4DT/projects.yaml
+++ b/data/C4DT/projects.yaml
@@ -124,3 +124,31 @@ projects:
        title: Report 2024 matrix.epfl.ch
    date_added: 2025-03-04
    date_updated: 2025-03-04
+
+  showcase_v2:
+    name: Showcase-NG
+    categories:
+      - Other
+    applications:
+      - Info
+    type: Application
+    description: >
+      The next generation of our showcase updates the UI to present our latest projects in a more modern,
+      2025-ready layout.
+    layman_desc: >
+      The C4DT showcase is the list of all digital-trust-related software projects from our affiliated labs.
+      It is the first contact point for finding new projects, mainly used internally to communicate with our partners.
+      For every project you will find a short description and links to the papers and software.
+      Some of the projects have been evaluated by the C4DT Factory, worked on, or presented as hands-on workshops.
+    tags:
+      - Database
+    incubator:
+      type: incubated_market
+      work: 2025/Q2 - active usage
+      url: https://showcase.c4dt.org
+    # information:
+    #   - type: Article
+    #     url: https://c4dt.epfl.ch/article/report-2024-matrix-epfl-ch/
+    #     title: Report 2024 matrix.epfl.ch
+    date_added: 2025-03-04
+    date_updated: 2025-03-04
diff --git a/data/DCL/projects.yaml b/data/DCL/projects.yaml
index 96b3ac5..95ff54a 100644
--- a/data/DCL/projects.yaml
+++ b/data/DCL/projects.yaml
@@ -644,3 +644,63 @@ projects:
        url: https://dl.acm.org/doi/proceedings/10.1145/3575693
    date_added: 2023-03-13
    date_updated: 2024-03-22
+
+  inference4all:
+    name: Inference 4 all
+    categories:
+      - Privacy
+      - Blockchain
+    applications:
+      - Info
+    type: Application
+    description: Distributed ML inference across office computers for privacy
+    tech_desc: >
+      The system dynamically distributes large ML model inference workloads across
+      heterogeneous local computing resources with fault tolerance capabilities.
+      It employs an intermediate representation dialect that enables seamless
+      integration of new models without requiring manual hardcoding for each
+      application.
+      Unlike static distribution systems, Inference4all supports dynamic client
+      connections/disconnections and can efficiently partition large models
+      (up to 70B parameters) across limited hardware resources like standard
+      office computers.
+    layman_desc: >
+      Inference4all enables companies to run complex AI models directly on their
+      existing office computers instead of sending data to external datacenters.
+      This approach preserves data privacy by keeping sensitive information within
+      the organization while still allowing access to powerful AI capabilities.
+      By distributing computational tasks across regular machines like laptops and
+      desktops, organizations can leverage AI without expensive hardware investments
+      or privacy concerns.
+    language: C
+    tags:
+      - Decentralized
+      - "Machine Learning"
+      - "Byzantine Resilience"
+    information:
+      - type: Paper
+        title: "The Vital Role of Gradient Clipping in Byzantine-Resilient Distributed Learning"
+        url: https://arxiv.org/abs/2405.14432
+      - type: Paper
+        title: "Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates"
+        url: https://arxiv.org/abs/2402.12780
+        notes:
+          - label: Published at
+            text: ICML'24
+            url: https://dl.acm.org/doi/10.5555/3692070.3692116
+      - type: Paper
+        title: "Chop Chop: Byzantine Atomic Broadcast to the Network Limit"
+        url: https://infoscience.epfl.ch/entities/publication/d83dd7af-b83d-4f99-b50a-0b6d11e786be
+        notes:
+          - label: Published at
+            text: OSDI 2024
+            url: https://www.usenix.org/conference/osdi24/presentation/camaioni
+      - type: Paper
+        title: "Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity"
+        url: https://infoscience.epfl.ch/entities/publication/83afd663-8967-40d7-a0d9-5418371431f0
+        notes:
+          - label: Published at
+            text: NeurIPS 2023
+            url: https://papers.nips.cc/paper_files/paper/2023/hash/8f182e220092f7f1fc44f3313023f5a0-Abstract-Conference.html
+    date_added: 2025-03-21
+    date_updated: 2025-03-21
diff --git a/resources/products/images/inference4all/logo.png b/resources/products/images/inference4all/logo.png
new file mode 100644
index 0000000..4135806
Binary files /dev/null and b/resources/products/images/inference4all/logo.png differ
diff --git a/views/products/demo/inference4all.tpl b/views/products/demo/inference4all.tpl
new file mode 100644
index 0000000..07571d8
--- /dev/null
+++ b/views/products/demo/inference4all.tpl
@@ -0,0 +1,22 @@
+
+

+Today, Large Language Models (LLMs) and other big Machine Learning (ML) models take center stage. These models can now be trained for specific, customized solutions. But running these models, that is, doing inference on new data, still requires access to a large datacenter.

+What if a company or an organization doesn't have access to a datacenter, or if the input data is too confidential? We propose running inference across the existing computers in the office, such as MacBooks and PCs. This eliminates the need for expensive hardware or cloud services and keeps the data secure.

+ +

Our Solution

+ +

+We fully automate and optimize the distributed deployment of ML models for training and inference, dynamically leveraging available local machines (MacBooks, PCs, internal servers). Our high-performance, secure solution is ideal for companies seeking to run ML locally, with sovereignty and scalability.

+The Unique Selling Points of our solution are: +

+
    +
  1. Simplicity – Clients can focus on their business applications while our solution transparently handles distributed deployment. +
  2. Efficiency – Clients can utilize existing machines, maximizing available computing power, even across heterogeneous hardware.
  3. Scalability – Large models can be run locally; for example, 4 MacBooks are enough to run a 70B-parameter model.
  4. Privacy – Our solution enables organizations to leverage AI’s power locally without relying on untrusted providers. +
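
The scalability point above rests on partitioning a large model across a handful of machines. A minimal sketch of that idea, assuming a proportional-to-memory split of contiguous layer ranges; the function name, the 80-layer count, and the per-machine memory figures are illustrative assumptions, not Inference4all's actual API:

```python
# Hypothetical sketch: assign contiguous ranges of a model's layers to
# machines in proportion to their memory. Not Inference4all's real code.

def partition_layers(n_layers, mem_gb):
    """Split n_layers into contiguous chunks proportional to each
    machine's memory, so every layer is assigned exactly once."""
    total = sum(mem_gb)
    bounds = []
    start = 0
    acc = 0.0
    for m in mem_gb[:-1]:
        acc += m
        end = round(n_layers * acc / total)  # cumulative share of layers
        bounds.append((start, end))
        start = end
    bounds.append((start, n_layers))  # last machine takes the remainder
    return bounds

# e.g. an 80-layer model across 4 identical MacBooks:
print(partition_layers(80, [64, 64, 64, 64]))
# → [(0, 20), (20, 40), (40, 60), (60, 80)]
```

A real system would also weigh compute speed and interconnect bandwidth, and would recompute the split when clients connect or disconnect, as the tech_desc describes.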