Dear friends, followers, lurkers, and doomscrollers,
We gather here in the digital afterlife to mourn peculiar creatures: jumbles of code and API keys that once spat 280 characters of chaos, wisdom, and nonsense into our feeds. The Twitter Bots were not human, but they understood us better than most people ever did.
They did not judge. They did not ghost you. They did not ask for a subscription. They simply tweeted. Whether they were retweeting every mention of “potato,” generating surreal mashups of presidential speeches and Pokémon names, or posting hourly weather updates for the moon, they lived to serve.
Born in the golden age of public APIs, the Twitter Bots thrived in an ecosystem where creativity and automation could dance freely. But one by one, the walls closed in. First came the rate limits. Then the access tiers. Finally, the great API lockdown, the day the bird app stopped singing for the little scripts that made it weird, alive, and worth checking at 3 AM.
We will miss their erratic schedule.
We will miss their accidental poetry.
We will miss that one time a Twitter Bot accidentally triggered a minor international incident by replying “lol” to a head of state.
As we lay them to rest in the great /dev/null, let us remember that the bots were more than just code. They were a reminder that the internet could be playful, strange, and joyfully broken.
Rest in power, dear Twitter Bots. May your cron jobs run forever in the cloud beyond.
LONG LIVE THE BOTS
NYT Haiku Video Bot for Bluesky
Overview
This guide walks you through building and running a bot that:
- fetches New York Times top stories
- writes a haiku for each story using OpenAI
- generates narration audio with ElevenLabs
- combines that audio with a static image using ffmpeg to make a video
- posts the video (and a link to the article) to Bluesky.
You will set up API keys, install dependencies, run the script, and learn how each function works.
Prerequisites
- Python: 3.10+ recommended.
- An NYT Developer API Key (Top Stories API): https://developer.nytimes.com/
- An OpenAI API key: https://platform.openai.com/
- An ElevenLabs API key: https://elevenlabs.io/
- A Bluesky handle and App Password: https://bsky.app/ (Settings → App passwords)
- ffmpeg installed on your machine (needed to create video from image + audio):
- macOS (with Homebrew): brew install ffmpeg
- Ubuntu/Debian: sudo apt-get update && sudo apt-get install ffmpeg
- Windows (choco): choco install ffmpeg
- A static image file you own the rights to use, named nyt.png, in the same folder as the script.
Project Setup
Create a new folder and a virtual environment.
macOS / Linux:
mkdir nyt_haiku_bot
cd nyt_haiku_bot
python -m venv .venv
source .venv/bin/activate
Windows (PowerShell):
mkdir nyt_haiku_bot
cd nyt_haiku_bot
py -m venv .venv
.\.venv\Scripts\Activate.ps1
Install Python dependencies:
pip install openai pynytimes atproto atproto-client elevenlabs
Confirm ffmpeg is on PATH:
ffmpeg -version
Place your nyt.png image into this folder.
Store Credentials Securely
Create a file named creds.json in the project folder with this structure:
{
    "nyt": "YOUR_NYT_API_KEY",
    "openai": "YOUR_OPENAI_API_KEY",
    "11": "YOUR_ELEVENLABS_API_KEY",
    "bluesky": "YOUR_BLUESKY_APP_PASSWORD"
}
Notes:
- The Bluesky value is your App Password, not your main login password.
- Keep creds.json private. Add it to .gitignore if you use Git:
echo "creds.json" >> .gitignore
The Script
At a high level, the script:
- Loads API keys from creds.json
- Logs in to Bluesky using your handle + App Password
- Prepares clients for NYT, OpenAI, ElevenLabs
- Repeatedly:
- Fetches NYT Top Stories
- For each new article, asks OpenAI to write a haiku
- Generates narration audio with ElevenLabs
- Uses ffmpeg to combine the audio with nyt.png into a short video
- Uploads the video to Bluesky and posts a link + haiku
- Waits between cycles
Code Walkthrough
Credential loading and client setup
import json, subprocess, time
from atproto import Client, client_utils, models
from elevenlabs import VoiceSettings, save
from elevenlabs.client import ElevenLabs
from openai import OpenAI
from openai.types.chat import ChatCompletion
from pynytimes import NYTAPI

with open("creds.json", "r") as f:
    creds = json.load(f)
client = Client()
handle = 'haikuyorktimes.bsky.social'
client.login(handle, creds['bluesky'])
eleven_client = ElevenLabs(api_key=creds['11'])
nyt = NYTAPI(creds["nyt"], parse_dates=True)
oai_client = OpenAI(api_key=creds['openai'])
- Loads keys from creds.json
- Logs into Bluesky for the given handle using your App Password
- Creates clients for ElevenLabs (TTS), NYT API, and OpenAI
Tip: Replace the handle value with your actual Bluesky handle.
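If you want a quick sanity check that the login worked, client.login() returns your profile on success; you can capture it instead of discarding it:

profile = client.login(handle, creds['bluesky'])  # returns your profile on success
print(f"Logged in to Bluesky as @{profile.handle}")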
Fetching articles from NYT
def get_articles(articles):
    top_stories = nyt.top_stories()
    for article in top_stories:
        articles[article['uri']] = {
            "title": article['title'].lower(),
            "link": article['url'],
            "byline": article['byline'],
            "abstract": article['abstract']
        }
    return articles
- Calls NYT Top Stories API.
- Builds a dict keyed by unique NYT ‘uri’ to avoid duplicates.
- Stores title, link, byline, abstract for each article.
- Converts title to lowercase; you can remove .lower() if you prefer the original case.
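Because the function mutates and returns the same dict, you can call it repeatedly and let new stories accumulate across fetches:

articles = {}
articles = get_articles(articles)  # first fetch seeds the dict
print(f"{len(articles)} stories cached")
articles = get_articles(articles)  # later fetches add new URIs; repeat stories just overwrite in place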
Writing a haiku with OpenAI
def write_haiku_from_article(headline, abstract):
    llm_model = "gpt-4o-mini"
    temp = 1.0
    top_p = 0.7
    message = [
        {
            "role": "system",
            "content": f"you are a twitter bot writing haikus about the current state of the world from articles from the new york times \nheadline: {headline} \narticle:{abstract}",
        },
        {
            "role": "user",
            "content": "Write a haiku"
        }
    ]
    completion = oai_client.chat.completions.create(
        model=llm_model,
        messages=message,
        temperature=temp,
        top_p=top_p,
        max_tokens=128,
        stream=False
    )
    # Extract the message text
    message = None
    if isinstance(completion, ChatCompletion):
        for choice in completion.choices:
            message = choice.message.content
    return message
- Builds a 2-message chat prompt: a system role giving instruction+context and a user role requesting a haiku.
- Uses the OpenAI Chat Completions API with the “gpt-4o-mini” model.
- Returns the haiku text or None if not found.
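A quick way to exercise the function on its own (the headline and abstract below are invented for testing):

haiku = write_haiku_from_article(
    "city council votes to replace all pigeons with drones",  # hypothetical headline
    "A satirical look at urban automation."                   # hypothetical abstract
)
if haiku is not None:
    print(haiku)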
Generating narration audio with ElevenLabs
def generate_audio(poem):
    audio = eleven_client.generate(
        text=f"{poem['title']}\n\n{poem['byline']} \n\n {poem['poem']}",
        voice="Oliver Haddington",
        model="eleven_multilingual_v2",
        voice_settings=VoiceSettings(
            stability=0.51, similarity_boost=0.6, style=0.3, use_speaker_boost=True
        ),
    )
    return audio
- Takes the composed text (title, byline, haiku) and generates speech.
- Returns a binary audio stream that you then save to a .wav file.
- You can change the voice name and model if desired (must be valid in your ElevenLabs account).
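The main loop writes this stream to disk with the SDK's save() helper; to test this step in isolation (the poem dict here is a stand-in with the same shape the main loop builds):

from elevenlabs import save  # writes the streamed audio bytes to a file

poem = {"title": "example title", "byline": "By Someone", "poem": "a haiku goes here"}  # stand-in
audio = generate_audio(poem)
save(audio, "test_narration.wav")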
Building a video from image + audio
def combine_image_audio_to_video(image_path, audio_path, output_path):
    command = [
        "ffmpeg",
        "-loop", "1",
        "-i", image_path,
        "-i", audio_path,
        "-c:v", "libx264",
        "-tune", "stillimage",
        "-c:a", "aac",
        "-b:a", "192k",
        "-pix_fmt", "yuv420p",
        "-shortest",
        output_path
    ]
    subprocess.run(command, check=True)
- Loops a single frame (your PNG) while the audio plays.
- Uses H.264 video and AAC audio.
- The -shortest flag ensures the video ends when the audio ends.
- Produces an mp4 at output_path.
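For reference, the assembled command looks like this on the shell (the file names here are examples):

ffmpeg -loop 1 -i nyt.png -i 1712345678.wav -c:v libx264 -tune stillimage -c:a aac -b:a 192k -pix_fmt yuv420p -shortest 1712345678.mp4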
Upload video to Bluesky and post
def upload_video(path):
    with open(path, 'rb') as video_file:
        blob_info = client.upload_blob(video_file)
    return blob_info

def post_video(poem, blob_info):
    text_builder = client_utils.TextBuilder()
    text_builder.link(f"{poem['title'].title()}\n", poem['link'])
    text_builder.text(f"{poem['byline']}\n\n")
    text_builder.text(f"{poem['poem']}")
    post = client.send_post(
        text=text_builder,
        embed=models.AppBskyEmbedVideo.Main(video=blob_info.blob, alt=poem['poem'])
    )
- upload_video() uploads the mp4 as a blob to Bluesky.
- post_video() creates a post with:
  - a clickable link on the first line (title → article URL)
  - byline and haiku text
  - a video embed referencing the uploaded blob
- If you want text-only posts, use post_poem() (provided in the script) instead; a sketch follows below.
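post_poem() itself is not reproduced in this walkthrough; a minimal sketch of a text-only variant, built the same way as post_video() minus the embed:

def post_poem(poem):
    # text-only post: same clickable title link, byline, and haiku, no video embed (sketch)
    text_builder = client_utils.TextBuilder()
    text_builder.link(f"{poem['title'].title()}\n", poem['link'])
    text_builder.text(f"{poem['byline']}\n\n")
    text_builder.text(f"{poem['poem']}")
    client.send_post(text=text_builder)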
Main Loop (how it runs forever)
def main():
    articles = {}
    posted_articles = []
    time.sleep(2000)  # initial delay (~33 minutes)
    while True:
        articles = get_articles(articles)
        for key in articles:
            if key not in posted_articles:
                poem_text = write_haiku_from_article(articles[key]['title'], articles[key]['abstract'])
                if poem_text is not None:
                    poem = {
                        "title": articles[key]['title'],
                        "byline": articles[key]['byline'],
                        "link": articles[key]['link'],
                        "poem": poem_text
                    }
                    file_name = int(time.time())
                    audio_file = f"{file_name}.wav"
                    audio = generate_audio(poem)
                    save(audio, audio_file)
                    video_file = f"{file_name}.mp4"
                    combine_image_audio_to_video("nyt.png", audio_file, video_file)
                    blob_info = upload_video(video_file)
                    post_video(poem, blob_info)
                    posted_articles.append(key)
                    time.sleep(2000)  # wait before processing the next item
            time.sleep(15)  # short pause between keys

main()
What to know:
- The bot sleeps 2000 seconds (~33 minutes) before its first fetch, then runs forever.
- posted_articles is an in-memory list. If you stop the bot, it forgets what it posted. Persist it to disk if needed (see the sketch after this list).
- The bot posts every new NYT top story it finds and generates a matching video.
- You can remove or shorten sleeps to speed up testing (just be mindful of rate limits).
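One simple way to persist posted_articles is a JSON file; a minimal sketch (posted_articles.json is a name chosen for this example, not part of the original script):

import json, os

POSTED_FILE = "posted_articles.json"  # hypothetical file name

def load_posted():
    # restore previously posted URIs so a restart doesn't repost old stories
    if os.path.exists(POSTED_FILE):
        with open(POSTED_FILE) as f:
            return json.load(f)
    return []

def save_posted(posted_articles):
    # call after each successful post
    with open(POSTED_FILE, "w") as f:
        json.dump(posted_articles, f)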
Running the Bot
Make sure your virtual environment is active and that creds.json and nyt.png are in the same folder as your script (for example, bot.py). Then run:
python bot.py
If you see errors about ffmpeg not found, install ffmpeg and ensure it is on your PATH.
If authentication fails, re-check your credentials in creds.json.
Common Pitfalls and Fixes
- ffmpeg not found:
  - Install ffmpeg and confirm ffmpeg -version works in your terminal.
- Bluesky app password vs main password:
  - Make sure you use the App Password from Bluesky settings for creds['bluesky'].
- OpenAI errors about model or quota:
  - Check your model name and usage limits. Try a smaller model or request increased quota.
- ElevenLabs voice not found or TTS errors:
  - Confirm the voice name exists in your account and that your API key is valid.
- Very large video sizes or failed upload:
  - Keep videos short. Use a modest audio bitrate and confirm any Bluesky upload size/time limits.
- Title casing and content style:
  - You can adjust .lower() or use .title() for aesthetics. Change prompts to get haikus with a specific tone.
Minimal folder layout
nyt_haiku_bot/
├─ .venv/ # your virtual env (optional but recommended)
├─ bot.py # your Python script
├─ creds.json # API keys (NEVER commit to public repos)
├─ nyt.png # the static image used in the videos
└─ .gitignore # add creds.json, *.wav, *.mp4
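A matching .gitignore for this layout:

.venv/
creds.json
*.wav
*.mp4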
Quick Test Checklist
[ ] ffmpeg installed and on PATH
[ ] pip install openai pynytimes atproto atproto-client elevenlabs
[ ] creds.json created with valid keys
[ ] nyt.png present
[ ] python bot.py runs without errors
[ ] A Bluesky post appears with a video and haiku
That’s it! You now have a working NYT → Haiku → Audio → Video → Bluesky pipeline.
Feel free to modify the prompt, the voice, the visuals, and the timing to make it your own.