Commit 94fdfac

[alessio to revert] a commit with examples

a
1 parent d991ac9 commit 94fdfac

7 files changed: +13 lines, -1 line

docs/source/tutorials/how_to_compare_two_ai_models_with_label_studio.md

Lines changed: 1 addition & 1 deletion
@@ -12,8 +12,8 @@ thumbnail: /images/tutorials/tutorials-compare-ai-models.png
 meta_title: How to Compare Two AI Models with Label Studio
 meta_description: Learn how to compare and evaluate two AI models with the Label Studio SDK.
 is_enterprise: true
-is_starter_cloud: true
 badges: SDK, Agreement, CodeLab
+duration: 10-15 mins
 ---

 ## Why this matters
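
Read together, the hunk above is a small frontmatter edit. For context, the visible portion of this file's frontmatter after the commit would look roughly like the sketch below, reconstructed only from the lines shown in the hunk; anything above line 11 of the file falls outside the diff context and is omitted.

```yaml
# Sketch of the post-commit frontmatter (lines ~11-17) of
# how_to_compare_two_ai_models_with_label_studio.md, assembled from
# the hunk above; keys before line 11 are outside the diff context.
thumbnail: /images/tutorials/tutorials-compare-ai-models.png
meta_title: How to Compare Two AI Models with Label Studio
meta_description: Learn how to compare and evaluate two AI models with the Label Studio SDK.
is_enterprise: true
badges: SDK, Agreement, CodeLab  # is_starter_cloud: true was removed just above this key
duration: 10-15 mins             # the one key this commit adds to every tutorial
---
```

The other six files below follow the same pattern: each gains a `duration: 10-15 mins` key, some also gain `badges` or plan flags (`is_enterprise`, `is_starter_cloud`), and only this first file loses a line.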

docs/source/tutorials/how_to_connect_Hugging_Face_with_Label_Studio_SDK.md

Lines changed: 3 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -11,6 +11,9 @@ report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/is
1111
thumbnail: /images/tutorials/tutorials-hugging-face-ls-sdk.png
1212
meta_title: How to Connect Hugging Face with Label Studio SDK
1313
meta_description: Learn how to create a NLP workflow by integrating Hugging Face datasets and models with Label Studio for annotation and active learning.
14+
is_starter_cloud: true
15+
badges: Agreement, CodeLab
16+
duration: 10-15 mins
1417
---
1518
**A Complete Guide to Connecting Hugging Face and Label Studio**
1619

docs/source/tutorials/how_to_create_a_Benchmark_and_Evaluate_your_models_with_Label_Studio.md

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@ report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/is
 thumbnail: /images/tutorials/tutorials-ai-benchmark-and-eval.png
 meta_title: How to Connect Hugging Face with Label Studio SDK
 meta_description: Learn how to use the Label Studio SDK to create a high-quality benchmark dataset to evaluate multiple AI models
+badges: SDK, Agreement, CodeLab
+duration: 10-15 mins
 ---
 Evaluating models is only as good as the benchmark you test them against.
 In this tutorial, you'll learn how to use **Label Studio** to create a high-quality benchmark dataset, label it with human expertise, and then evaluate multiple AI models against it — all using the **Label Studio SDK**.

docs/source/tutorials/how_to_debug_agents_with_LangSmith_and_Label_Studio.md

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@ report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/is
 thumbnail: /images/tutorials/tutorials-debug-agents-langsmith.png
 meta_title: How to Debug Agents with LangSmith and Label Studio
 meta_description: Learn how LangSmith and Label Studio can work together to debug and evaluate AI Agents.
+is_enterprise: true
+duration: 10-15 mins
 ---
 ## 0. Label Studio Requirements


docs/source/tutorials/how_to_embed_evaluation_workflows_in_your_research_stack_with_Label_Studio.md

Lines changed: 3 additions & 0 deletions
@@ -11,6 +11,9 @@ report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/is
 thumbnail: /images/tutorials/tutorials-eval-flows-research-stack.png
 meta_title: How to Embed Evaluation Workflows in Your Research Stack with Label Studio
 meta_description: Learn how to build an embedded evaluation workflow directly into your jupyer notebook with Label Studio.
+is_enterprise: true
+is_starter_cloud: true
+duration: 10-15 mins
 ---
 ## Label Studio Requirements


docs/source/tutorials/how_to_measure_inter_annotator_agreement_and_build_human_consensus.md

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/is
 thumbnail: /images/tutorials/tutorials-inter-annotator-agreement-and-consensus.png
 meta_title: "How to Measure Inter-Annotator Agreement and Build Human Consensus with Label Studio"
 meta_description: Learn how to measure inter-annotator agreement, build human consensus, establish ground truth and compare model predictions using the Label Studio SDK.
+duration: 10-15 mins
 ---

 This tutorial walks through a practical workflow to measure inter-annotator agreement, build human consensus, establish ground truth and

docs/source/tutorials/how_to_multi_turn_chat_evals_with_chainlit_and_label_studio.md

Lines changed: 1 addition & 0 deletions
@@ -11,6 +11,7 @@ report_bug_url: https://github.com/HumanSignal/awesome-label-studio-tutorials/is
 thumbnail: /images/tutorials/tutorials-eval-multi-turn-chainlit.png
 meta_title: "How to Evaluate Multi-Turn AI Conversations with Chainlit and Label Studio"
 meta_description: Learn how to create a Label Studio project for evaluating chatbot conversations using the Chatbot Evaluation template.
+duration: 10-15 mins
 ---
 This notebook demonstrates how to create a Label Studio project for evaluating chatbot conversations using the Chatbot Evaluation template.

