# Contributing to LMM-Vibes
Thank you for your interest in contributing to LMM-Vibes! This guide will help you get started.
## Getting Started

### Prerequisites
- Python 3.8 or higher
- Git
- Basic knowledge of Python and machine learning
### Setting Up the Development Environment

- **Fork the Repository**

  ```bash
  # Fork on GitHub, then clone your fork
  git clone https://github.com/your-username/LMM-Vibes.git
  cd LMM-Vibes
  ```

- **Install Development Dependencies**

  ```bash
  # Install in development mode
  pip install -e .

  # Install development dependencies
  pip install -r requirements-dev.txt
  ```

- **Set Up Pre-commit Hooks**

  ```bash
  # Install pre-commit hooks
  pre-commit install
  ```
## Development Workflow

### 1. Create a Feature Branch

```bash
# Create and switch to a new branch
git checkout -b feature/your-feature-name

# Or for bug fixes
git checkout -b fix/your-bug-description
```
### 2. Make Your Changes
- Write your code following the Code Style guidelines
- Add tests for new functionality
- Update documentation as needed
### 3. Test Your Changes

```bash
# Run the test suite
pytest

# Run with coverage
pytest --cov=lmmvibes

# Run linting and formatting
flake8 lmmvibes/
black lmmvibes/
```
### 4. Commit Your Changes

```bash
# Stage your changes
git add .

# Commit with a descriptive message
git commit -m "feat: add new evaluation metric"

# Push to your fork
git push origin feature/your-feature-name
```
### 5. Create a Pull Request
- Go to your fork on GitHub
- Click "New Pull Request"
- Select your feature branch
- Fill out the PR template
- Submit the PR
## Code Style

### Python Code
We follow PEP 8 with some modifications:
- **Line Length**: 88 characters (Black default)
- **Docstrings**: Google style
- **Type Hints**: Required for all public functions
#### Example
```python
from typing import List, Dict, Optional


def evaluate_model(
    data: List[Dict],
    metrics: List[str] = ["accuracy"],
    config: Optional[Dict] = None,
) -> Dict:
    """Evaluate model performance on given data.

    Args:
        data: List of dictionaries containing evaluation data
        metrics: List of metric names to compute
        config: Optional configuration dictionary

    Returns:
        Dictionary containing evaluation results

    Raises:
        EvaluationError: If evaluation fails
    """
    # Your implementation here
    pass
```
## Documentation
- All public functions must have docstrings
- Use Google style docstrings
- Include type hints
- Add examples for complex functions
## Testing
- Write tests for all new functionality
- Aim for at least 80% code coverage
- Use descriptive test names
- Test both success and failure cases
### Example Test
```python
import pytest

from lmmvibes.evaluation import evaluate_model


def test_evaluate_model_basic():
    """Test basic model evaluation functionality."""
    data = [
        {"question": "What is 2+2?", "answer": "4", "model_output": "4"}
    ]
    results = evaluate_model(data, metrics=["accuracy"])
    assert "accuracy" in results
    assert results["accuracy"] == 1.0


def test_evaluate_model_invalid_data():
    """Test evaluation with invalid data."""
    with pytest.raises(ValueError):
        evaluate_model([])
```
## Project Structure

```
LMM-Vibes/
├── lmmvibes/            # Main package
│   ├── __init__.py
│   ├── evaluation.py    # Core evaluation functions
│   ├── data.py          # Data loading and processing
│   ├── metrics.py       # Evaluation metrics
│   ├── visualization.py # Plotting and visualization
│   ├── config.py        # Configuration management
│   └── utils.py         # Utility functions
├── tests/               # Test suite
├── docs/                # Documentation
├── examples/            # Example scripts
└── requirements.txt     # Dependencies
```
## Adding New Features

### 1. New Metrics
To add a new evaluation metric (see the sketch below):

- Create the metric class in `lmmvibes/metrics.py`
- Inherit from the `Metric` base class
- Implement the `compute` method
- Add tests in `tests/test_metrics.py`
- Update documentation
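As a rough illustration of these steps, here is a sketch of a simple exact-match metric. The `Metric` base class, the `compute` signature, and the `name` attribute are assumptions based on the checklist above, not the actual interface; check `lmmvibes/metrics.py` for the real base class before copying this.

```python
# Illustrative sketch only: the real Metric interface in lmmvibes/metrics.py
# may use different method names or signatures.
from typing import Dict, List

from lmmvibes.metrics import Metric  # assumed base class per the checklist above


class ExactMatch(Metric):
    """Scores 1.0 when the model output exactly matches the reference answer."""

    name = "exact_match"  # assumed registration attribute

    def compute(self, data: List[Dict]) -> Dict:
        # Expects the same record keys used elsewhere in this guide:
        # "answer" and "model_output".
        if not data:
            raise ValueError("data must not be empty")
        correct = sum(
            1 for ex in data
            if ex["model_output"].strip() == ex["answer"].strip()
        )
        return {self.name: correct / len(data)}
```

A matching test in `tests/test_metrics.py` would then build a tiny hand-written dataset and assert the expected score, mirroring the `test_evaluate_model_basic` example above.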
### 2. New Data Formats

To add support for new data formats (see the sketch below):

- Add format detection in `lmmvibes/data.py`
- Implement loading/saving functions
- Add validation logic
- Write tests
- Update documentation
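The loader API is not specified in this guide, so the sketch below only illustrates the detect-then-load-then-validate flow described above; the function names (`detect_format`, `load_data`) and the JSONL/CSV branches are hypothetical, not part of the existing `lmmvibes/data.py`.

```python
# Hypothetical sketch of format detection, loading, and validation.
import csv
import json
from pathlib import Path
from typing import Dict, List


def detect_format(path: str) -> str:
    """Guess the data format from the file extension."""
    suffix = Path(path).suffix.lower()
    if suffix == ".jsonl":
        return "jsonl"
    if suffix == ".csv":
        return "csv"
    raise ValueError(f"Unsupported data format: {suffix}")


def load_data(path: str) -> List[Dict]:
    """Load evaluation records and check that the expected keys are present."""
    fmt = detect_format(path)
    if fmt == "jsonl":
        with open(path, encoding="utf-8") as f:
            records = [json.loads(line) for line in f if line.strip()]
    else:  # csv
        with open(path, newline="", encoding="utf-8") as f:
            records = list(csv.DictReader(f))

    # Validation: every record needs the keys the evaluator expects.
    for record in records:
        if "question" not in record or "model_output" not in record:
            raise ValueError("Each record needs 'question' and 'model_output' keys")
    return records
```

Tests should cover both a well-formed file and one with missing keys, in line with the success-and-failure guideline above.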
### 3. New Visualization Types

To add new visualization types (see the sketch below):

- Add plotting functions in `lmmvibes/visualization.py`
- Follow the existing API patterns
- Add configuration options
- Write tests
- Update documentation
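For a concrete starting point, the sketch below shows one way such a plotting function could look: a configurable helper that returns a figure. The function name, the shape of the `results` mapping, and the use of matplotlib are assumptions rather than the existing `lmmvibes/visualization.py` API.

```python
# Illustrative sketch only; the existing visualization API may differ.
from typing import Dict, Optional

import matplotlib.pyplot as plt


def plot_metric_comparison(
    results: Dict[str, Dict[str, float]],
    metric: str = "accuracy",
    config: Optional[Dict] = None,
) -> plt.Figure:
    """Bar chart comparing one metric across models.

    Args:
        results: Mapping of model name -> {metric name -> score}.
        metric: Which metric to plot.
        config: Optional plot settings, e.g. {"title": ..., "figsize": ...}.

    Returns:
        The matplotlib Figure, so callers can save or further customize it.
    """
    config = config or {}
    fig, ax = plt.subplots(figsize=config.get("figsize", (6, 4)))
    models = list(results.keys())
    scores = [results[m].get(metric, 0.0) for m in models]
    ax.bar(models, scores)
    ax.set_ylabel(metric)
    ax.set_title(config.get("title", f"Model comparison: {metric}"))
    fig.tight_layout()
    return fig
```

Returning the `Figure` rather than calling `plt.show()` keeps the function easy to test and to reuse in scripts and notebooks.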
## Bug Reports
When reporting bugs, please include:
- **Environment**: Python version, OS, package versions
- **Reproduction**: Steps to reproduce the issue
- **Expected vs. Actual**: What you expected vs. what actually happened
- **Error Messages**: The full error traceback
- **Minimal Example**: Code that reproduces the issue (see the sketch below)
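For instance, a minimal example for an evaluation bug could be a short, self-contained script like the hypothetical one below: the smallest possible input plus a direct call to the failing function, with a note on expected vs. actual behavior.

```python
# Hypothetical minimal reproduction: small input, direct call, nothing else.
from lmmvibes.evaluation import evaluate_model

data = [{"question": "What is 2+2?", "answer": "4", "model_output": "4"}]

# Expected: {"accuracy": 1.0}
# Actual:   <describe what actually happens and paste the full traceback>
results = evaluate_model(data, metrics=["accuracy"])
print(results)
```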
## Feature Requests
When requesting features, please include:
- **Use Case**: What problem does this solve?
- **Proposed Solution**: How should it work?
- **Alternatives**: What other approaches have you considered?
- **Implementation**: Any thoughts on implementation?
## Code Review Process
- **Automated Checks**: All PRs must pass CI checks
- **Review**: At least one maintainer must approve
- **Tests**: All tests must pass
- **Documentation**: Documentation must be updated
- **Style**: Code must follow the style guidelines
## Release Process

### For Maintainers
- **Update Version**: Update the version in `setup.py`
- **Update Changelog**: Add release notes
- **Create Release**: Tag and create a GitHub release
- **Publish**: Upload to PyPI
### Version Numbers
We use semantic versioning (MAJOR.MINOR.PATCH):
- **MAJOR**: Breaking changes
- **MINOR**: New features (backward compatible)
- **PATCH**: Bug fixes (backward compatible)
## Getting Help
- **Issues**: Use GitHub issues for bugs and feature requests
- **Discussions**: Use GitHub discussions for questions
- **Documentation**: Check the docs first
- **Code**: Look at existing code for examples
## Recognition
Contributors will be recognized in:
- GitHub contributors list
- Release notes
- Documentation acknowledgments
## Next Steps
- Check out the Testing Guide for detailed testing information
- Read the API Reference to understand the codebase
- Look at Basic Usage for usage examples