Python SDK

AI Content Generation

Generate social media content using AI providers

Experimental

This module is experimental and provides only basic functionality. For advanced use cases (multiple providers, streaming, tool calling), consider using ai-sdk-python or your preferred AI library directly with Late.
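For instance, a minimal sketch of the direct approach using the official openai package (the account ID, prompt, and model here are illustrative):

from openai import OpenAI

from late import Late, Platform

ai = OpenAI()                         # reads OPENAI_API_KEY from the environment
client = Late(api_key="late_api_key")

# Generate the copy with the AI library of your choice...
completion = ai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a tweet about automation"}],
)

# ...then publish it through Late as usual
post = client.posts.create(
    content=completion.choices[0].message.content,
    platforms=[{"platform": Platform.TWITTER, "accountId": "tw_123"}],
    publish_now=True,
)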

The AI module provides a simple interface for generating social media content with OpenAI. Install the ai extra to use this feature:

pip install late-sdk[ai]

Quick Start

from late import CaptionTone, Platform
from late.ai import ContentGenerator, GenerateRequest

# Initialize with OpenAI
generator = ContentGenerator(
    provider="openai",
    api_key="sk-...",       # Or set OPENAI_API_KEY env var
    model="gpt-4o-mini",    # Optional: gpt-4o, gpt-4-turbo, o1-mini, etc.
)

# Generate content
response = generator.generate(
    GenerateRequest(
        prompt="Write a tweet about the benefits of automation",
        platform=Platform.TWITTER,
        tone=CaptionTone.PROFESSIONAL,
    )
)

print(response.text)
# "Automation isn't just about efficiency—it's about freeing your team to focus on what truly matters: innovation, creativity, and building meaningful connections. #Automation #Productivity"

ContentGenerator Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| provider | str | "openai" | AI provider to use |
| api_key | str | None | API key (or use the OPENAI_API_KEY env var) |
| model | str | "gpt-4o-mini" | Model to use (gpt-4o, gpt-4-turbo, o1-mini, etc.) |

Generate Social Media Posts

Basic Generation

from late import CaptionTone, Platform
from late.ai import ContentGenerator, GenerateRequest

generator = ContentGenerator(provider="openai", api_key="sk-...")

# Using GenerateRequest
response = generator.generate(
    GenerateRequest(
        prompt="Write about our new product launch",
        platform=Platform.LINKEDIN,
        tone=CaptionTone.PROFESSIONAL,
    )
)

print(response.text)
print(f"Model: {response.model}")
print(f"Tokens used: {response.usage}")

Convenience Method

from late import CaptionTone, Platform

# Quick post generation
content = generator.generate_post(
    topic="remote work productivity tips",
    platform=Platform.TWITTER,
    tone=CaptionTone.CASUAL,
    language="en"
)

print(content)

Platform-Optimized Content

The generator adapts content based on the platform:

from late import CaptionTone, Platform

# Twitter-optimized (short, punchy)
twitter_content = generator.generate_post(
    topic="product launch",
    platform=Platform.TWITTER,
    tone=CaptionTone.CASUAL
)

# LinkedIn-optimized (professional, longer)
linkedin_content = generator.generate_post(
    topic="product launch",
    platform=Platform.LINKEDIN,
    tone=CaptionTone.PROFESSIONAL
)

Async Generation

import asyncio
from late import CaptionTone, Platform
from late.ai import ContentGenerator, GenerateRequest

async def generate_content():
    generator = ContentGenerator(provider="openai", api_key="sk-...")

    # Async generation
    response = await generator.agenerate(
        GenerateRequest(
            prompt="Write about AI in social media",
            platform=Platform.TWITTER,
        )
    )
    print(response.text)

    # Async convenience method
    content = await generator.agenerate_post(
        topic="AI trends",
        platform=Platform.LINKEDIN,
        tone=CaptionTone.INFORMATIVE
    )
    print(content)

asyncio.run(generate_content())

Streaming

Stream generated content as it's produced:

import asyncio
from late import Platform
from late.ai import ContentGenerator, GenerateRequest

async def stream_content():
    generator = ContentGenerator(provider="openai", api_key="sk-...")

    request = GenerateRequest(
        prompt="Write a detailed LinkedIn post about leadership",
        platform=Platform.LINKEDIN,
        max_tokens=500
    )

    async for chunk in generator.agenerate_stream(request):
        print(chunk, end="", flush=True)

asyncio.run(stream_content())

GenerateRequest Options

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| prompt | str | required | What to generate |
| system | str | None | System prompt/instructions |
| max_tokens | int | 500 | Maximum output tokens |
| temperature | float | 0.7 | Creativity (0.0-1.0) |
| platform | str | None | Target platform for optimization |
| tone | str | None | Writing tone |
| language | str | "en" | Output language |
| context | dict | {} | Additional context |

Tones

Use the CaptionTone enum or string values:

from late import CaptionTone, Platform
from late.ai import GenerateRequest

response = generator.generate(
    GenerateRequest(
        prompt="Write about our product",
        platform=Platform.TWITTER,
        tone=CaptionTone.PROFESSIONAL,  # Or just "professional"
    )
)

| Enum | String | Description |
|------|--------|-------------|
| CaptionTone.PROFESSIONAL | "professional" | Business-appropriate, formal |
| CaptionTone.CASUAL | "casual" | Friendly, conversational |
| CaptionTone.HUMOROUS | "humorous" | Fun, witty |
| CaptionTone.INSPIRATIONAL | "inspirational" | Motivational, uplifting |
| CaptionTone.INFORMATIVE | "informative" | Educational, fact-focused |

Example with All Options

from late import CaptionTone, Platform
from late.ai import GenerateRequest

response = generator.generate(
    GenerateRequest(
        prompt="Write about sustainable technology",
        system="You are a tech journalist writing for a professional audience.",
        max_tokens=300,
        temperature=0.8,
        platform=Platform.LINKEDIN,
        tone=CaptionTone.PROFESSIONAL,
        language="en",
        context={"company": "TechCorp", "industry": "cleantech"}
    )
)

GenerateResponse

| Field | Type | Description |
|-------|------|-------------|
| text | str | Generated content |
| provider | str | Provider used (e.g., "openai") |
| model | str | Model used (e.g., "gpt-4") |
| usage | dict or None | Token usage stats |
| finish_reason | str or None | Why generation stopped |
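
A short sketch of reading these fields after a call (the printed values are illustrative):

response = generator.generate(GenerateRequest(prompt="Write about our product"))

print(response.text)           # generated content
print(response.provider)       # e.g. "openai"
print(response.model)          # e.g. "gpt-4o-mini"
print(response.finish_reason)  # e.g. "stop", or None
if response.usage:             # token usage stats, when the provider reports them
    print(response.usage)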

Use with Posts

Combine AI generation with post creation:

from late import Late, CaptionTone, Platform
from late.ai import ContentGenerator, GenerateRequest

client = Late(api_key="late_api_key")
generator = ContentGenerator(provider="openai", api_key="sk-...")

# Generate content
response = generator.generate(
    GenerateRequest(
        prompt="Announce our Black Friday sale - 50% off all plans",
        platform=Platform.TWITTER,
        tone=CaptionTone.CASUAL
    )
)

# Create post with generated content
post = client.posts.create(
    content=response.text,
    platforms=[{"platform": Platform.TWITTER, "accountId": "tw_123"}],
    publish_now=True
)

print(f"Posted: {post['post']['id']}")

Multi-Platform Content

Generate platform-specific versions:

from datetime import datetime, timedelta
from late import Late, CaptionTone, Platform
from late.ai import ContentGenerator, GenerateRequest

client = Late(api_key="late_api_key")
generator = ContentGenerator(provider="openai", api_key="sk-...")

topic = "our new AI-powered features"

# Generate for each platform
platforms_content = {}
for platform in [Platform.TWITTER, Platform.LINKEDIN, Platform.INSTAGRAM]:
    response = generator.generate(
        GenerateRequest(
            prompt=f"Write about {topic}",
            platform=platform,
            tone=CaptionTone.PROFESSIONAL
        )
    )
    platforms_content[platform] = response.text

# Create multi-platform post with custom content
post = client.posts.create(
    content=platforms_content[Platform.TWITTER],  # Default
    platforms=[
        {
            "platform": Platform.TWITTER,
            "accountId": "tw_123",
        },
        {
            "platform": Platform.LINKEDIN,
            "accountId": "li_456",
            "customContent": platforms_content[Platform.LINKEDIN]
        },
        {
            "platform": Platform.INSTAGRAM,
            "accountId": "ig_789",
            "customContent": platforms_content[Platform.INSTAGRAM]
        },
    ],
    scheduled_for=datetime.now() + timedelta(hours=1)
)

Provider Configuration

OpenAI

generator = ContentGenerator(
    provider="openai",
    api_key="sk-...",  # Or set OPENAI_API_KEY env var
    model="gpt-4o",    # Default: gpt-4o-mini
)

Available models: Any OpenAI chat model (gpt-4o, gpt-4o-mini, gpt-4-turbo, o1-mini, etc.)

Custom Provider

Register a custom AI provider:

from late.ai import ContentGenerator, AIProvider, GenerateRequest, GenerateResponse

class MyCustomProvider:
    @property
    def name(self) -> str:
        return "custom"

    @property
    def default_model(self) -> str:
        return "my-model"

    def generate(self, request: GenerateRequest) -> GenerateResponse:
        # Your implementation
        return GenerateResponse(
            text="Generated content",
            provider=self.name,
            model=self.default_model
        )

    async def agenerate(self, request: GenerateRequest) -> GenerateResponse:
        return self.generate(request)

# Register and use
ContentGenerator.register_provider("custom", MyCustomProvider)
generator = ContentGenerator(provider="custom")

API Reference

ContentGenerator

ContentGenerator(
    provider: str = "openai",
    **provider_kwargs
)

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| provider | str | "openai" | Provider name |
| **provider_kwargs | | | Provider-specific args (api_key, model, etc.) |

Methods

generate(request) - Generate content synchronously

agenerate(request) - Generate content asynchronously

agenerate_stream(request) - Stream content asynchronously

generate_post(topic, platform, tone, language) - Quick post generation

agenerate_post(topic, platform, tone, language) - Async quick generation