CoAL

Accelerate your solution development with the CosmoTech Acceleration Library (CoAL).

Overview

The CosmoTech Acceleration Library (CoAL) is a Python library that provides pre-built functions and CLI tools to accelerate solution development. It offers standardized interfaces for common operations like data handling, API interactions, and external service integrations.

What is CoAL?

CoAL provides four main components:

  • Core Library: Base functionality and utilities
  • CSM Data CLI: Command-line interface for data operations
  • Orchestrator Plugins: Templates and extensions for CSM-ORC
  • Translation: Internationalization support

CSM Data CLI

Available Commands

The csm-data CLI provides commands for common operations:

  • API Operations: Interact with CosmoTech APIs
  • Data Management: Handle datasets and files
  • Storage Operations: Work with Azure, AWS, and other storage
  • Database Operations: PostgreSQL, ADX, and SQLite operations

[Placeholder: List of main csm data commands]

Command Groups

Key command groups include:

  • adx: Azure Data Explorer operations
  • azure-storage: Azure Blob Storage operations
  • s3: AWS S3 operations (used for Dell ECS integration)
  • postgresql: Database operations
  • API commands: Direct platform API interactions

Data Store

SQLite Integration

CoAL provides a local SQLite database for:

  • Temporary Data Storage: Keep data between run template steps
  • Data Processing: SQL-based data manipulation
  • Performance: Fast local access vs. repeated file operations
  • Export: Dump to various target systems

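A minimal sketch of the pattern, using Python's built-in sqlite3 module rather than CoAL's own data store API (module paths and helper names vary by CoAL version, so check the CoAL reference for the real calls). An early run template step writes intermediate results into the local database; a later step reads them back instead of re-parsing files:

    # step_1_load.py -- an early step of the run template
    import csv
    import sqlite3

    DB_PATH = "datastore.db"  # local SQLite file shared by all steps

    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS customers (id TEXT, region TEXT, demand REAL)"
        )
        with open("customers.csv", newline="") as f:  # hypothetical input file
            rows = [
                (r["id"], r["region"], float(r["demand"]))
                for r in csv.DictReader(f)
            ]
        conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)

    # step_2_process.py -- a later step reuses the stored data
    import sqlite3

    with sqlite3.connect("datastore.db") as conn:
        total = conn.execute("SELECT SUM(demand) FROM customers").fetchone()[0]
        print(f"Total demand: {total}")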

Common Data Store Operations

  1. Import: Load CSV, JSON, database tables into SQLite
  2. Transform: Use SQL for data transformations
  3. Export: Output to PostgreSQL, ADX, files, etc.
  4. Query: Interactive SQL queries during development
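
Steps 2 and 3 above can stay in plain SQL. A hedged continuation of the earlier sketch (table and column names are invented for the example):

    import csv
    import sqlite3

    with sqlite3.connect("datastore.db") as conn:
        # Transform: build an aggregate table with SQL instead of Python loops
        conn.executescript("""
            DROP TABLE IF EXISTS demand_by_region;
            CREATE TABLE demand_by_region AS
            SELECT region, SUM(demand) AS total_demand
            FROM customers
            GROUP BY region;
        """)
        # Export: dump the result to CSV for the next system in line
        with open("demand_by_region.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["region", "total_demand"])
            writer.writerows(conn.execute("SELECT * FROM demand_by_region"))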

Integration with Run Templates

Using CoAL Commands

Replace complex custom code with standardized commands (a before/after sketch follows this list):

  • Use csm-data run-load-data instead of custom data fetching
  • Use data store commands for intermediate processing
  • Use export commands for final data output

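The "before" half below is the kind of hand-rolled fetch code CoAL replaces; the "after" half is the single standardized command named above. The environment variable names are hypothetical, and the command's exact flags depend on the platform setup, so check the csm-data help output:

    # BEFORE: custom data fetching each project maintains by hand
    import os

    import requests  # a third-party dependency every template must manage itself

    api_url = os.environ["CSM_API_URL"]      # hypothetical variable names
    token = os.environ["CSM_API_TOKEN"]

    response = requests.get(
        f"{api_url}/datasets/my-dataset",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    with open("dataset.json", "wb") as f:
        f.write(response.content)

    # AFTER: the same step as one standardized command in the run template,
    # with authentication, retries, and error handling delegated to CoAL:
    #
    #   csm-data run-load-data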

Template Acceleration

CoAL templates provide shortcuts for common operations:

  • fetch-data: Standard data download template
  • send-to-adx: Export results to Azure Data Explorer
  • validate-data: Common validation patterns

Cloud Service Integration

Azure Services

  • Blob Storage: File storage and retrieval
  • Data Explorer (ADX): Analytics database operations
  • Functions: Azure Functions integration

AWS Services

  • S3: Object storage (including Dell ECS compatibility)
  • Authentication: IAM and credential management

API Integration

  • CosmoTech API: Simplified platform API access
  • Authentication: Automatic token management
  • Error Handling: Robust error management and retry logic
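
As an illustration of what automatic token management saves you from writing by hand, here is a minimal generic token cache with expiry-based refresh (the endpoint and payload fields are hypothetical; CoAL handles this internally for the CosmoTech API):

    import time

    import requests

    TOKEN_URL = "https://login.example.com/oauth2/token"  # hypothetical endpoint

    _cached_token = None
    _expires_at = 0.0

    def get_token() -> str:
        """Return a cached bearer token, refreshing it shortly before expiry."""
        global _cached_token, _expires_at
        if _cached_token is None or time.time() >= _expires_at - 60:
            resp = requests.post(TOKEN_URL, data={"grant_type": "client_credentials"})
            resp.raise_for_status()
            payload = resp.json()
            _cached_token = payload["access_token"]
            _expires_at = time.time() + payload["expires_in"]
        return _cached_token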

Development Workflow

Fast CLI Loading

CoAL uses lazy loading:

  • The CLI starts quickly by importing modules only when needed
  • Individual commands load their dependencies at runtime
  • This improves responsiveness for frequent CLI usage
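
The general shape of the pattern, as a generic sketch (this is not CoAL's actual source, and the heavy dependency is hypothetical):

    import click

    @click.group()
    def cli():
        """Entry point: stays fast because heavy imports are deferred."""

    @cli.command()
    def send_to_adx():
        # The heavy dependency is imported only when this command runs,
        # so `cli --help` never pays its import cost.
        import pandas as pd  # hypothetical heavy dependency

        frame = pd.DataFrame({"value": [1, 2, 3]})
        click.echo(f"Sending {len(frame)} rows")

    if __name__ == "__main__":
        cli()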

Local Development

  • Test CoAL operations locally with same environment as production
  • Use data store for iterative development
  • Validate with real data before deployment

Library Organization

CoAL is organized into functional modules:

  • AWS: Amazon service integrations (S3)
  • Azure: Azure service integrations (ADX, Blob, Functions)
  • CosmoTech API: Platform API wrappers
  • Data Store: SQLite database operations
  • PostgreSQL: Database connectivity
  • Utils: Helper functions and utilities

Best Practices

Performance

  • Use data store for intermediate processing instead of files
  • Leverage SQL for data transformations
  • Use batch operations for large datasets
  • Cache frequently accessed data
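
For instance, batching inserts into the data store is far faster than row-at-a-time writes. A generic sqlite3 illustration:

    import sqlite3

    rows = [(i, i * 1.5) for i in range(100_000)]  # synthetic data

    with sqlite3.connect("datastore.db") as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS measures (id INTEGER, value REAL)")
        # One executemany call runs as a single transaction; looping over
        # conn.execute with a commit per row can be orders of magnitude slower.
        conn.executemany("INSERT INTO measures VALUES (?, ?)", rows)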

Code Organization

  • Use CoAL commands instead of custom implementations
  • Standardize on CoAL patterns across projects
  • Document any custom extensions
  • Keep custom code minimal

Error Handling

  • Leverage CoAL's built-in error handling
  • Use proper logging for debugging
  • Implement appropriate retry logic
  • Handle network timeouts gracefully
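
Where CoAL's built-in handling does not cover a custom call, a bounded retry with exponential backoff is usually enough. A generic sketch (the attempt count and timeout are illustrative):

    import logging
    import time

    import requests

    log = logging.getLogger(__name__)

    def fetch_with_retry(url: str, attempts: int = 3, timeout: float = 30.0) -> bytes:
        """Retry transient network failures with exponential backoff."""
        for attempt in range(1, attempts + 1):
            try:
                response = requests.get(url, timeout=timeout)
                response.raise_for_status()
                return response.content
            except (requests.Timeout, requests.ConnectionError) as exc:
                log.warning("Attempt %d/%d failed: %s", attempt, attempts, exc)
                if attempt == attempts:
                    raise
                time.sleep(2 ** attempt)  # back off 2s, 4s, ... between attempts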

Migration from Custom Code

When adopting CoAL:

  1. Identify Patterns: Find operations that match CoAL capabilities
  2. Replace Gradually: Migrate one operation at a time
  3. Test Thoroughly: Ensure equivalent functionality
  4. Update Documentation: Document CoAL usage patterns

Platform Evolution

CoAL evolves with the platform:

  • API Changes: CoAL adapts to new API versions
  • New Services: New integrations added over time
  • Backward Compatibility: Maintains compatibility across versions

Further Reading