LLM Labs

LLM Labs is a platform for LLM experimentation. It lets you integrate your knowledge base and deploy LLM applications with ease.

Overview

LLM Labs offers a versatile suite of tools designed to streamline your LLM journey. Here's what you can achieve within LLM Labs:

  • Effortless Integration: Connect your existing knowledge base for a unified platform that leverages your data for LLM applications.

  • Seamless Deployment: Effortlessly deploy your LLM applications and make them accessible via API, integrating them into your workflows.

  • Enhanced Experimentation (Introducing the LLM Playground): Explore the capabilities of various LLM models through the intuitive LLM Playground, fostering experimentation and refinement before deployment.

LLM Playground

The LLM Playground, a key feature within LLM Labs, provides a user-friendly environment designed specifically for LLM experimentation. It allows you to:

  • Connect your preferred LLM model: Integrate your choice of LLM models to explore their functionalities.

  • Create a dedicated playground: Set up a personalized workspace for your LLM experimentation.

  • Configure your playground (LLM Application): This configuration defines your LLM Application and involves elements such as the following (see the sketch after this list):

    • Prompt Template: Craft a template that specifies the format for user prompts sent to your LLM model. This ensures consistency and clarity in user interactions.

    • Context (Vector store): Optionally, integrate a context vector store to provide additional background information to the LLM model, potentially improving its understanding and response accuracy.

    • Configuration: Fine-tune various parameters for the connected LLM model, such as temperature or token settings, to optimize its performance for your specific use case.

  • Run prompts: Test your LLM models with various prompts to evaluate their responses and refine your approach.
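
To make these elements concrete, here is a minimal sketch of what such a configuration might look like, expressed as a Python dictionary. The field names (prompt_template, context_store, temperature, max_tokens) are illustrative assumptions for explanation only, not the platform's actual schema:

    # Illustrative sketch of an LLM Application configuration.
    # Field names are assumptions, not LLM Labs' real schema.
    playground_config = {
        "name": "support-assistant",           # descriptive playground name
        "model": "gpt-3.5-turbo",              # the connected LLM
        "prompt_template": (                   # format for user prompts
            "You are a helpful support agent.\n"
            "Context: {context}\n"
            "Question: {question}"
        ),
        "context_store": "my-knowledge-base",  # optional vector store for background info
        "temperature": 0.7,                    # sampling randomness
        "max_tokens": 256,                     # response length cap
    }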

LLM Playground offers several advantages for LLM enthusiasts and developers:

  • Reduced Risk: Experiment with different models without committing to deployment, minimizing potential risks associated with real-world use cases.

  • Enhanced Understanding: Gain deeper insights into individual LLM models and their capabilities through hands-on experimentation.

  • Optimized Configuration: Fine-tune model parameters within the playground to achieve the best possible results for your specific needs.

  • Streamlined Development: Test and refine your LLM applications in a controlled environment before deployment, ensuring optimal performance.

Getting Started with the LLM Playground

The LLM Playground is designed for ease of use. Here's a quick guide to get you started:

Step 1: Create Your LLM Playground

  • Click on "Create Playground" to establish your dedicated workspace.

  • You can assign a descriptive name to your playground for easy identification.

Step 2: Configure the Playground Application

  • Access the "Models Configuration" section within your playground.

  • You can adjust various parameters for the connected LLM model, such as temperature or token settings.

  • Experiment with different configurations to observe their impact on the model's responses; a short illustration follows this list. Learn More about Models.
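
The snippet below is not an LLM Labs API; it simply illustrates, using the standard OpenAI Python client (OpenAI models are among those supported here), how temperature and token settings shape a model's responses:

    # Illustration of playground-style parameters with the standard
    # OpenAI Python client; this is not an LLM Labs API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    for temperature in (0.0, 0.7, 1.2):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Suggest a name for a chess club."}],
            temperature=temperature,  # higher values produce more varied output
            max_tokens=64,            # caps the length of the response
        )
        print(temperature, response.choices[0].message.content)

Running the same prompt at several temperatures makes the effect of the parameter easy to compare side by side.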

Step 3: Run Prompts

  • Enter your desired prompt within the designated area.

  • This prompt can be a question, a task instruction, or any text input you want the LLM model to process.

  • Click "Run" to trigger the model's response based on your prompt.

Deploying the LLM

Once you've crafted the perfect LLM configuration within the Playground, you can seamlessly transition it into a real-world application.

This configured Playground environment is your LLM Application. LLM Labs lets you deploy it effortlessly, making it accessible via API for integration into your workflows.

The deployment process is designed for simplicity. Here's how to deploy your LLM Application:

  1. Navigate to the Deployment Page: Within the LLM Playground, locate the dedicated deployment section.

  2. Choose Your LLM Application: Select the specific LLM Application (configured Playground) you want to deploy.

  3. Deploy and Access: Initiate the deployment process. Upon successful completion, you'll be able to access your deployed LLM Application via cURL or client code in languages such as Python and TypeScript, allowing you to integrate it into your development projects. An illustrative Python example follows these steps.
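
As one illustration of that access, here is a sketch of calling a deployed application over HTTP from Python with the requests library. The endpoint URL, auth header, and payload shape are hypothetical placeholders; use the actual values shown on your deployment page:

    # Hypothetical example of calling a deployed LLM Application.
    # URL, auth header, and payload fields are placeholders, not the real contract.
    import requests

    API_URL = "https://api.example.com/llm-apps/support-assistant"  # from your deployment page
    API_KEY = "YOUR_API_KEY"

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"question": "How do I deploy my application?"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())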

Cost Prediction Calculation

Overview

The Cost Prediction for Inference feature provides transparency and predictability around the costs incurred during language model inference. Users can estimate the financial implications of their inference activities within LLM Labs.

How to View the Cost Prediction

Currently, cost prediction is supported only for OpenAI and Azure OpenAI models.

  1. Open your LLM Application.

  2. Write your prompt query, and the predicted cost for your prompt templates will be shown.

The cost prediction is calculated from your available prompt templates: the more prompt templates you have, the higher the predicted cost. A sketch of the underlying token arithmetic follows.
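
The arithmetic behind such an estimate is straightforward: count the tokens in each prompt template, multiply by the model's per-token input price, and sum over templates. A minimal sketch using the tiktoken tokenizer; the per-1K-token price is an illustrative placeholder, not a quoted rate:

    # Sketch of token-based cost estimation across prompt templates.
    # PRICE_PER_1K_INPUT_TOKENS is an illustrative placeholder, not a quoted rate.
    import tiktoken

    PRICE_PER_1K_INPUT_TOKENS = 0.0005  # check your provider's current pricing

    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

    prompt_templates = [
        "Context: {context}\nQuestion: {question}",
        "Summarize the following document:\n{document}",
    ]

    total_cost = 0.0
    for template in prompt_templates:
        tokens = len(encoding.encode(template))
        cost = tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
        total_cost += cost
        print(f"{tokens} tokens -> ${cost:.6f}")

    # More templates mean more input tokens, hence a higher predicted cost.
    print(f"Total predicted input cost: ${total_cost:.6f}")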

View Prediction Details

To see the prediction details, click the cost prediction. A dialog with the detailed cost breakdown will appear.

You can also break down and compare the cost prediction for each of your available prompt templates.
