Hello World
After configuring your OpenAI API key, you can run this example by executing `npm run llm:sample1` in the terminal. Give it a try!
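If you are curious what such a script looks like, here is a minimal sketch; the file name, model, and prompt are illustrative placeholders, not necessarily the tutorial's actual sample:

```typescript
// llm/sample1.ts - a minimal "Hello World" chat completion (illustrative sketch)
import OpenAI from "openai";

// The client reads OPENAI_API_KEY from the environment by default
const client = new OpenAI();

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "user", content: "Say hello to the world in one sentence." },
    ],
  });
  console.log(response.choices[0].message.content);
}

main();
```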
How LLMs Process Your Input
Before we write our first prompt, let’s understand what’s actually happening under the hood.

The Context Window: Your Working Memory

Think of an LLM’s context window like RAM on your computer. Everything you send - your instructions, conversation history, documents - gets loaded into this window. The model can only “see” what fits inside.

Current Context Windows (as of January 2025; verify the latest on the vendor pages below):
- GPT-4: 128K tokens (~96K words)
- Claude Sonnet 4.5: 200K tokens (~150K words)
- Gemini 1.5 Pro: 2M tokens (~1.5M words)
As an example, a customer support assistant might budget its window like this:
- 2K tokens: System instructions
- 5K tokens: Company knowledge base excerpts
- 10K tokens: Conversation history
- 3K tokens: Customer account details
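To check whether a request will fit, you can estimate token usage before sending it. Below is a rough sketch; the 4-characters-per-token heuristic and the placeholder strings are assumptions, not measurements:

```typescript
// Back-of-the-envelope token budgeting before sending a request.
// The ~4 characters per token heuristic is rough and English-only;
// use a real tokenizer (e.g. the tiktoken package) when accuracy matters.
const CONTEXT_WINDOW = 128_000; // e.g. GPT-4's window; adjust per model

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Placeholder strings standing in for the budget items above
const parts: Record<string, string> = {
  systemInstructions: "You are a helpful support agent...",
  knowledgeBaseExcerpts: "Refund policy: ...",
  conversationHistory: "User: Hi...\nAssistant: Hello! How can I help?",
  customerAccountDetails: "Plan: Pro, renews 2025-03-01",
};

const used = Object.values(parts)
  .map(estimateTokens)
  .reduce((sum, n) => sum + n, 0);

console.log(`~${used} tokens used, ~${CONTEXT_WINDOW - used} left for the response`);
```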
LLM Limitations You Must Know
1. Hallucinations: Making Stuff Up

LLMs are trained to predict the next plausible token. They’re not fact-checking databases.

Famous Failure: Air Canada’s chatbot hallucinated a bereavement discount policy that didn’t exist. A tribunal ordered the airline to honor it. Cost: unknown, but a significant legal precedent. (BBC, 2024)

Why It Happens:
- Missing information → fills gaps with plausible-sounding text
- Conflicting instructions → makes judgment calls
- Outdated training data → invents current information
How to Mitigate:
- Constrain to provided context: “Only use information from these documents” (sketched below)
- Validate outputs: check facts against source data
- Ground answers in retrieved knowledge (covered throughout the tutorial)
- Add human review for high-stakes decisions
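Here is a minimal sketch of the first mitigation; the model, document text, and exact wording are placeholder assumptions:

```typescript
// Mitigation sketch: constrain answers to provided documents
import OpenAI from "openai";

const client = new OpenAI();

// Placeholder knowledge snippet; in practice this comes from retrieval
const documents = `
Bereavement fares: not offered. Refunds follow the standard 24-hour policy.
`;

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    temperature: 0, // minimize creative gap-filling
    messages: [
      {
        role: "system",
        content:
          "Only use information from the documents below. " +
          'If the answer is not in them, reply "I don\'t know."\n\n' +
          `Documents:\n${documents}`,
      },
      { role: "user", content: "Do you offer bereavement discounts?" },
    ],
  });
  console.log(response.choices[0].message.content);
}

main();
```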
2. Non-Determinism: Same Input, Different Output

The same prompt can produce different completions across runs. How to Control It:
- Temperature=0 for minimal-creativity tasks (classification, extraction)
- Temperature=0.3-0.7 for creative tasks (writing, brainstorming)
- Run multiple times and vote (self-consistency, covered in 1.5; sketched below)
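A minimal sketch of the voting approach: sample several classifications at a nonzero temperature and take the majority. The model, labels, and prompt are placeholder assumptions:

```typescript
// Self-consistency sketch: sample several answers, take the majority vote
import OpenAI from "openai";

const client = new OpenAI();

async function classifyOnce(text: string): Promise<string> {
  const res = await client.chat.completions.create({
    model: "gpt-4o-mini",
    temperature: 0.7, // deliberate variability across samples
    messages: [
      {
        role: "system",
        content:
          "Classify the sentiment as exactly one word: positive, negative, or neutral.",
      },
      { role: "user", content: text },
    ],
  });
  return res.choices[0].message.content?.trim().toLowerCase() ?? "neutral";
}

async function classifyWithVoting(text: string, samples = 5): Promise<string> {
  const answers = await Promise.all(
    Array.from({ length: samples }, () => classifyOnce(text)),
  );
  const counts = new Map<string, number>();
  for (const a of answers) counts.set(a, (counts.get(a) ?? 0) + 1);
  // Return the most frequent answer
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

classifyWithVoting("The checkout flow keeps crashing.").then(console.log);
```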
GPT-5 models do not support the temperature parameter; using it will raise an error, which breaks backward compatibility with earlier OpenAI models. Instead, GPT-5 introduces a new way to control output variability: reasoning depth, set via a reasoning-effort parameter. To achieve similar results with reasoning effort set higher, or with another GPT-5 family model, try these alternative parameters:
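Here is a sketch of those parameters using the Responses API as documented at the time of writing; the model name and values are examples, so verify against the current OpenAI docs:

```typescript
// Controlling GPT-5 output without temperature
import OpenAI from "openai";

const client = new OpenAI();

async function main() {
  const response = await client.responses.create({
    model: "gpt-5",
    input: "Classify the sentiment of: 'The checkout flow keeps crashing.'",
    reasoning: { effort: "minimal" }, // less reasoning: faster, more literal output
    text: { verbosity: "low" }, // keep the answer short
  });
  console.log(response.output_text);
}

main();
```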
Mental Model: LLMs as Completion Engines
Wrong Mental Model: “The AI understands my intent”
Right Mental Model: “The AI completes patterns it’s seen in training”

Example: