Hey GGLighthouse,
We’ve got an in-house LLM we want to plug in here.
Yes! The LLM node accepts custom endpoints:
const llmNode = runtime.nodes.llm({
  provider: 'custom',
  endpoint: 'https://your-llm-api.com/v1/chat',
  model: 'your-model-name'
});
Just have your endpoint speak the OpenAI-compatible request/response format and you’re good to go.
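For reference, here’s a rough sketch of the request/response shapes an OpenAI-compatible chat endpoint exchanges. The field names follow the OpenAI chat completions format; the helper functions are just for illustration, not part of the runtime API:

```javascript
// Build a chat-completions request body in the OpenAI-compatible format.
function buildChatRequest(model, messages) {
  return {
    model,     // e.g. 'your-model-name'
    messages,  // [{ role: 'user', content: '...' }, ...]
  };
}

// Pull the assistant's reply out of an OpenAI-compatible response body.
function extractReply(responseBody) {
  return responseBody.choices[0].message.content;
}

// Example request your endpoint should accept:
const req = buildChatRequest('your-model-name', [
  { role: 'user', content: 'Hello!' },
]);

// Example response shape your endpoint should return:
const exampleResponse = {
  id: 'chatcmpl-123',
  object: 'chat.completion',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Hi there!' },
      finish_reason: 'stop',
    },
  ],
};
```

As long as your in-house API accepts the `model`/`messages` request body and returns `choices[0].message.content`, the node should be able to talk to it.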