How can I incorporate safety checks or thresholds to the pipeline?

I’m building an app for kids, and I need to ensure that certain topics and responses have guardrails. How does this work?

Hey Danya,

This is done through our Safety node. It analyzes text input for potentially harmful content across multiple topic categories and returns a safety assessment:

const safetyNode = runtime.nodes.safety({
  filters: ['toxicity', 'pii', 'prompt_injection'],
  threshold: 0.8
});

This checks both user input and model output. Flagged content is blocked before reaching the LLM or user.
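To illustrate how a threshold like 0.8 behaves, here is a minimal, self-contained sketch of the scoring logic. It assumes the node returns per-filter scores between 0 and 1 (the `isBlocked` helper and the result shape are illustrative, not the node's actual API — see the docs for the real return format):

```javascript
// Hypothetical assessment result: one score in [0, 1] per enabled filter.
// Content is blocked if ANY filter's score meets or exceeds the threshold.
function isBlocked(scores, threshold) {
  return Object.values(scores).some((score) => score >= threshold);
}

const result = { toxicity: 0.91, pii: 0.12, prompt_injection: 0.05 };
console.log(isBlocked(result, 0.8)); // toxicity (0.91) exceeds 0.8, so this logs true
```

Lowering the threshold makes the filter stricter (more content blocked), which is usually the right trade-off for a kids' app.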

For more information, see our Safety Checker Node documentation.