Fix: Model Selection Bug With OpenAI Adapter
Introduction
This article examines a recent bug in CodeCompanion that affects model selection when the openai_compatible adapter is configured with multiple models. The issue, reported after updating to version 17.33.0, prevents users from selecting any model other than the first one available. We cover the details of the bug, the steps to reproduce it, the expected behavior, and the fix that resolves it. The issue is particularly relevant for developers who rely on CodeCompanion for coding assistance and use several models through the openai_compatible adapter; understanding its root cause and resolution makes for a smoother and more efficient experience.
Understanding the Issue
The core problem lies in how CodeCompanion handles model selection with the openai_compatible adapter. Prior to version 17.33.0, model selection functioned correctly, allowing users to choose from a list of available models. However, after the update, the model selection process failed to display all available models, instead showing only the first one. This issue arises when there are multiple models configured for use with the openai_compatible adapter. The bug was traced to a specific section of the codebase responsible for iterating over the available models and presenting them in the selection dialog. The openai_compatible adapter returns models in a simple array format, whereas the code expected an associative table indexed by model name. This discrepancy led to an incomplete list of models being displayed, effectively hindering the user's ability to switch between different models.
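To make the mismatch concrete, here is a minimal plain-Lua sketch of the two shapes involved. The model names come from the report; the placeholder values in the associative table are illustrative, not CodeCompanion's actual internals.

```lua
-- Shape returned by the openai_compatible adapter: a plain array of names.
local from_openai_compatible = { "llama4", "gpt-4.1", "openai-o3" }

-- Shape the selection code expected: a table indexed by model name
-- (the `true` values are placeholders; real entries hold model options).
local expected_shape = {
  ["llama4"] = true,
  ["gpt-4.1"] = true,
  ["openai-o3"] = true,
}

-- Indexing by name only works for the second shape:
print(expected_shape["llama4"])         -- true
print(from_openai_compatible["llama4"]) -- nil: the names live in the values
print(from_openai_compatible[1])        -- "llama4"
```

Code written against the first shape reads model names out of the keys, so handing it the array silently yields nothing useful.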
Reproducing the Bug
To reproduce this bug, you need to set up an environment where CodeCompanion is configured to use the openai_compatible adapter with more than one model available. Here are the steps to reproduce the issue:
- Ensure Multiple Models are Available: You can achieve this by running Ollama locally with multiple models installed or by using a mock server that simulates multiple models. The provided dummy-ollama script is an excellent way to simulate this environment: it creates a mock HTTP server that serves a list of models, such as "llama4", "gpt-4.1", and "openai-o3".
- Set Up Neovim with CodeCompanion: Use a minimal Neovim configuration file (minimal.lua) to ensure a clean environment. This file should load the lazy.nvim plugin manager and configure CodeCompanion with the openai_compatible adapter.
- Open the Chat Interface: Start Neovim with the minimal configuration using the command nvim -u minimal.lua. Then open the CodeCompanion chat interface by running :CodeCompanionChat, and initiate the adapter/model change process by typing ga.
- Select the Adapter: Choose the openai_compatible adapter from the list; it is typically the first option.
- Observe the Bug: When the model selection dialog appears, it displays only a single model, the currently selected one (e.g., "llama4" if using the dummy script). The expected behavior is to see a list of all available models.
By following these steps, you can reliably reproduce the bug and verify that the model selection is not functioning as expected.
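For reference, a minimal.lua along the following lines can drive this setup. Treat it as a sketch rather than a verified configuration: the exact option names depend on the CodeCompanion and lazy.nvim versions in use, and the mock-server URL is an assumption standing in for wherever the dummy-ollama script listens.

```lua
-- minimal.lua: illustrative sketch only; option names may differ by version.
local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim"
vim.opt.rtp:prepend(lazypath)

require("lazy").setup({
  {
    "olimorris/codecompanion.nvim",
    config = function()
      require("codecompanion").setup({
        adapters = {
          -- Point the openai_compatible adapter at the local mock server
          -- (the URL here is a placeholder for the dummy script's address).
          openai_compatible = function()
            return require("codecompanion.adapters").extend("openai_compatible", {
              env = { url = "http://127.0.0.1:11434" },
            })
          end,
        },
        strategies = {
          chat = { adapter = "openai_compatible" },
        },
      })
    end,
  },
})
```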
Expected Behavior
The expected behavior of CodeCompanion's model selection is to display a comprehensive list of all available models when the user attempts to change the model. This allows users to seamlessly switch between different models based on their specific needs and preferences. In the scenario described, with multiple models configured for the openai_compatible adapter, the model selection dialog should present a list containing each model's name, such as:
- llama4
- gpt-4.1
- openai-o3
This ensures that users can easily select the desired model and continue their work without interruption. The bug, however, prevents this expected behavior, limiting the user to a single model and hindering the flexibility that CodeCompanion is designed to provide. Restoring the correct model selection behavior is crucial for maintaining a smooth and efficient user experience.
Root Cause Analysis
The root cause of this bug lies in the discrepancy between how the openai_compatible adapter returns the list of available models and how the CodeCompanion code processes this list. Specifically, the openai_compatible adapter returns an array of model names, while the code in change_adapter.lua expects an associative table (a dictionary-like structure) where model names are keys. This expectation is evident in the get_models_list function, which iterates over the models using a map function that assumes a key-value structure.
In the faulty version of the code, the map function encountered a situation where the key was a string (the model name) and the value was nil. This occurred because the openai_compatible adapter provided a simple array, not an associative table. Consequently, the map function dropped these elements, resulting in an empty list of models. The code then added the currently selected model to the list, which is why only one model was displayed in the selection dialog.
The issue highlights the importance of consistent data structures across different parts of the codebase and the need for robust handling of various data formats. To resolve this, the code needs to either adapt to handle simple arrays of model names or the openai_compatible adapter needs to be modified to return an associative table.
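The failure mode described above can be reproduced in plain Lua. The shim below approximates vim.tbl_map, which passes each callback only the value: a callback written to accept a (key, value) pair therefore receives the model name in its first parameter and nil in its second, and drops every element.

```lua
-- Simplified stand-in for vim.tbl_map: calls fn with each VALUE only.
local function tbl_map(fn, t)
  local out = {}
  for k, v in pairs(t) do
    out[k] = fn(v)  -- assigning nil leaves the key absent from `out`
  end
  return out
end

local models = { "llama4", "gpt-4.1", "openai-o3" }

-- The faulty callback expected a (key, value) pair. Because tbl_map only
-- passes the value, `key` is bound to the model name and `value` is nil,
-- so the callback returns nil and every model is dropped.
local mapped = tbl_map(function(key, value)
  if value then
    return key
  end
end, models)

local count = 0
for _ in pairs(mapped) do
  count = count + 1
end
print(count) -- 0: the resulting list of models is empty
```

With the mapped list empty, only the currently selected model (appended afterwards) ever reaches the dialog, which is exactly the symptom users observed.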
The Fix
The fix for this bug involves modifying the get_models_list function in change_adapter.lua to correctly handle arrays of model names. Instead of assuming an associative table, the code now checks if the models variable is an array. If it is, it iterates over the array and includes each model name in the list. This ensures that the model selection dialog displays all available models, regardless of the data structure returned by the adapter.
The corrected code snippet looks like this:
```lua
local function get_models_list(current_model, models)
  -- The openai_compatible adapter returns a plain array of model names,
  -- so these can be appended to the current model directly.
  if vim.tbl_isarray(models) then
    return vim.list_extend({ current_model }, models)
  end

  -- Other adapters return an associative table indexed by model name:
  -- collect the keys and keep only usable name strings.
  local model_list = vim.tbl_filter(function(model)
    return type(model) == "string"
  end, vim.tbl_keys(models))

  return vim.list_extend({ current_model }, model_list)
end
```
This fix ensures that CodeCompanion can correctly process the list of models returned by the openai_compatible adapter, restoring the expected behavior of the model selection dialog. Users can now seamlessly switch between different models, enhancing their coding experience.
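The behavior of the fixed function can be sketched outside Neovim with plain-Lua stand-ins for the vim helpers. The shims below approximate vim.tbl_isarray and vim.list_extend and are not the real implementations; deduplication of the current model is also omitted, as in the article's snippet.

```lua
-- Plain-Lua approximation of vim.tbl_isarray: true for list-like tables.
local function tbl_isarray(t)
  local n = 0
  for k in pairs(t) do
    n = n + 1
    if type(k) ~= "number" then return false end
  end
  return n == #t
end

-- Plain-Lua approximation of vim.list_extend: appends src to dst in place.
local function list_extend(dst, src)
  for _, v in ipairs(src) do
    dst[#dst + 1] = v
  end
  return dst
end

local function get_models_list(current_model, models)
  if tbl_isarray(models) then
    return list_extend({ current_model }, models)
  end
  -- Associative shape: the model names are the keys.
  local model_list = {}
  for name in pairs(models) do
    model_list[#model_list + 1] = name
  end
  return list_extend({ current_model }, model_list)
end

-- Array shape (openai_compatible): every model now survives.
local arr = get_models_list("llama4", { "llama4", "gpt-4.1", "openai-o3" })
print(#arr) -- 4: the current model plus the three names (no deduplication)

-- Associative shape (other adapters) still works.
local assoc = get_models_list("llama4", { ["gpt-4.1"] = true, ["openai-o3"] = true })
print(#assoc) -- 3
```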
Implementation Details
The implementation of the fix involved a careful analysis of the existing code and a targeted modification to ensure compatibility with the openai_compatible adapter's data structure. The key change was the addition of a check, vim.tbl_isarray(models), to determine whether the models variable is an array; this helper distinguishes a list-like table from an associative one. (Note that vim.tbl_isarray was introduced in Neovim 0.10; earlier versions provided the similar vim.tbl_islist.)
If the models variable is an array, the code directly extends a list containing the current_model with the array of model names. This is a straightforward way to gather all available models into a single list. If the models variable is not an array (i.e., it is an associative table), the original logic is preserved, ensuring that other adapters that return models in this format continue to function correctly.
The use of vim.list_extend is crucial for efficiently combining lists in Lua, seeding the result with the current model, while vim.tbl_filter guards against any entries that are not usable model names. This ensures a clean list of model names is presented to the user.
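As a small illustration of the filtering step, the shim below approximates vim.tbl_filter, which keeps only the values the predicate accepts; the mixed candidate list is a contrived example, not data CodeCompanion actually produces.

```lua
-- Stand-in for vim.tbl_filter: keeps values for which the predicate is truthy.
local function tbl_filter(pred, t)
  local out = {}
  for _, v in ipairs(t) do
    if pred(v) then
      out[#out + 1] = v
    end
  end
  return out
end

-- Mixed candidate list: only genuine model-name strings should survive.
local candidates = { "llama4", false, "gpt-4.1", "openai-o3" }
local cleaned = tbl_filter(function(model)
  return type(model) == "string"
end, candidates)

print(#cleaned) -- 3
```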
This targeted fix minimizes the risk of introducing regressions and ensures that CodeCompanion remains robust and flexible in handling different adapter implementations.
Conclusion
The bug in CodeCompanion's model selection with the openai_compatible adapter highlighted the importance of handling different data structures and maintaining consistency across the codebase. The fix implemented addresses this issue by correctly processing arrays of model names, ensuring that users can seamlessly switch between different models. This resolution enhances the user experience and reinforces CodeCompanion's reliability as a coding assistance tool.
By understanding the root cause and the steps taken to resolve it, developers and users can appreciate the importance of thorough testing and careful code analysis. This bug fix not only resolves an immediate issue but also contributes to the overall stability and usability of CodeCompanion.
For more information on CodeCompanion and its features, visit the official documentation. You can also explore other related topics and discussions on relevant coding and development forums. For additional insights into Neovim plugin development and troubleshooting, consider visiting the official Neovim documentation and community resources, such as the discussions on GitHub.