Zeno Instance Views are modular renderers for different data types and tasks.
Each of the following views can be passed as the `view` option in a TOML configuration file. To create a new or custom view, see Creating a view.
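For example, a minimal configuration sketch selecting a view. Only the `view` key is documented here; the other keys are illustrative assumptions, not part of this reference:

```toml
# Hedged sketch of a Zeno TOML configuration.
view = "image-classification"

# Assumed additional settings for a typical classification project;
# check the Zeno configuration documentation for the actual keys.
metadata = "metadata.csv"
data_column = "image_path"
label_column = "label"
```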
| View | Description |
| --- | --- |
| `image-classification` | Display images with ground-truth and predicted class labels. Works for both binary and multiclass classification. Requires image inputs and text or numeric outputs. |
| `text-classification` | Display text with ground-truth and predicted class labels. Requires text inputs and text or numeric outputs. |
| `audio-transcription` | Display an audio file along with output text, e.g. a transcription. Requires audio inputs and text outputs. |
| `image-segmentation` | Display an image with overlaid ground-truth and predicted segmentation masks. Works for binary segmentation. Requires image inputs and binary image outputs. |
| `code-generation` | Show formatted code inputs and code predictions. Use for evaluating code generation models such as Codex. |
| `openai-chat` | Show input-output pairs from chatbot models using the OpenAI API. See the API documentation for details on the required Chat data format. |
| `openai-chat-markdown` | Like `openai-chat`, but renders assistant blocks and labels with Markdown syntax, enabling the representation of more complex queries such as tool use. See the API documentation for details on the required Chat data format. |
| `chatbot` | Show a single input-output pair from chatbot models. |
| `space-separated-values` | Table view of inputs, outputs, and labels that are space-separated words. Useful for tasks such as part-of-speech tagging. |
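The chat views above expect OpenAI-style chat data. A minimal sketch of that message shape, assuming the standard role/content format from the OpenAI API; the exact schema Zeno requires is in the API documentation referenced above:

```python
# Hedged sketch: OpenAI-style chat data is a list of role/content
# dicts. The exact fields Zeno expects may differ; see the API
# documentation referenced in the table.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What does this code do?"},
    {"role": "assistant", "content": "It sorts the list in place."},
]

# Every message carries a role and content key.
assert all({"role", "content"} <= set(m) for m in messages)
```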