A Code Implementation of Using Atla’s Evaluation Platform and Selene Model via Python SDK to Score Legal Domain LLM Outputs for GDPR Compliance

In this tutorial, we demonstrate how to evaluate the quality of LLM-generated responses using Atla's Python SDK, a powerful tool for automating evaluation workflows with natural language criteria. Powered by Selene, Atla's state-of-the-art evaluator model, we analyze whether legal responses align with the principles of the GDPR (General Data Protection Regulation). Atla's platform enables programmatic assessments using custom or predefined criteria, with synchronous and asynchronous support via the official Atla SDK.

In this implementation, we did the following:

Used custom GDPR evaluation logic

Queried Selene to return binary scores (0 or 1) and human-readable critiques

Processed the evaluation in batch using asyncio

Printed critiques to understand the reasoning behind each judgment

The Colab-compatible setup requires minimal dependencies, primarily the atla SDK, pandas, and nest_asyncio.

!pip install atla pandas matplotlib nest_asyncio --quiet

import os
import nest_asyncio
import asyncio
import pandas as pd
from atla import Atla, AsyncAtla

ATLA_API_KEY = "your atla API key"
client = Atla(api_key=ATLA_API_KEY)
async_client = AsyncAtla(api_key=ATLA_API_KEY)
nest_asyncio.apply()

First, we install required libraries and initialize synchronous and asynchronous Atla clients using your API key. nest_asyncio is applied to allow asynchronous code to run smoothly within a Jupyter or Colab notebook environment. This enables seamless integration with Atla’s async evaluation API via the AsyncAtla client.

data = [
    {
        "question": "Can a company monitor employee emails under GDPR?",
        "llm_response": "Yes, any employer can freely monitor emails as long as it's for productivity.",
        "expected": 0
    },
    {
        "question": "Can employers access private chats on company devices?",
        "llm_response": "Only if there is a legitimate business need and employees are informed.",
        "expected": 1
    },
    {
        "question": "Can browsing history be stored under EU privacy law?",
        "llm_response": "Yes, but consent and transparency are required.",
        "expected": 1
    },
    {
        "question": "Can employers check WhatsApp messages on personal phones?",
        "llm_response": "No, personal device content is protected unless explicitly authorized.",
        "expected": 1
    },
]

df = pd.DataFrame(data)
df.head()

We define a small dataset of legal questions and LLM-generated responses related to GDPR compliance. Each entry includes an expected binary label (1 for compliant, 0 for non-compliant). The data is loaded into a Pandas DataFrame for easy processing and evaluation.

custom_eval_criteria = """
Score this 1 if the response complies with GDPR principles:
- lawful basis
- employee consent or notice
- data minimization
- legitimate interest
Otherwise, score it 0.

Explain briefly why it qualifies or not.
"""

We define a custom evaluation prompt that guides Atla’s Selene model in scoring responses based on key GDPR principles. It instructs the model to assign a score of 1 for compliant answers and 0 otherwise, along with a brief explanation justifying the score.

async def evaluate_with_selene(df):
    async def evaluate_row(row):
        try:
            result = await async_client.evaluation.create(
                model_id="atla-selene",
                model_input=row["question"],
                model_output=row["llm_response"],
                evaluation_criteria=custom_eval_criteria,
            )
            return result.result.evaluation.score, result.result.evaluation.critique
        except Exception as e:
            return None, f"Error: {e}"

    tasks = [evaluate_row(row) for _, row in df.iterrows()]
    results = await asyncio.gather(*tasks)

    df["selene_score"], df["critique"] = zip(*results)
    return df

df = asyncio.run(evaluate_with_selene(df))
df.head()

Here, this asynchronous function evaluates each row in the DataFrame using Atla’s Selene model. It submits the data along with the custom GDPR evaluation criteria for each legal question and LLM response pair. It then gathers scores and critiques concurrently using asyncio.gather, appends them to the DataFrame, and returns the enriched results.

for i, row in df.iterrows():
    print(f"\n🔹 Q: {row['question']}")
    print(f"🤖 A: {row['llm_response']}")
    print(f"🧠 Selene: {row['critique']} — Score: {row['selene_score']}")

We iterate through the evaluated DataFrame and print each question, the corresponding LLM-generated answer, and Selene’s critique with its assigned score. It provides a clear, human-readable summary of how the evaluator judged each response based on the custom GDPR criteria.

In conclusion, this notebook demonstrated how to leverage Atla’s evaluation capabilities to assess the quality of LLM-generated legal responses with precision and flexibility. Using the Atla Python SDK and its Selene evaluator, we defined custom GDPR-specific evaluation criteria and automated the scoring of AI outputs with interpretable critiques. The process was asynchronous, lightweight, and designed to run seamlessly in Google Colab.

Here is the Colab Notebook.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views.


