Arcos

documents/arcos.md


Page 1

arcos Building the Technical Backbone of Inner Security.

Page 2

Our Societal Stability Largely Rests on a Human-Overseen Decision Layer Between Triggered Sensors & Intervention.

Commercial Alarm Monitoring.

Sensor → Operator → Intervention

(Pictured labels on city image:) Panic · Misc · Fire · Burglary · Leakage · Elderly

Page 3

An Essential Service Industry, 53.8bn EUR in Size, Growing in Demand and Accelerated by Rising Security Needs.

Market Trends

53.8bn EUR (2025) → 70.5bn EUR (2029), 6.1% CAGR

RoW Market Share: 59.2% · European Market Share: 40.8%

Attacks on critical infrastructure are growing by 43% p.a.

There is a burglary in Europe every 90 seconds.

Total loss through missing prevention: 133.9bn EUR p.a.

Legislative Tailwinds pushing demand: CER & KRITIS

Page 4

Alarm Receiving Centers: Unscalable 90s-Era Operating Logic and Manual Labour for Repetitive, Script-Based Tasks.

Status Quo

Evaluation Looking into alarm scenarios through available data.

Investigation Verification calls following standardized scripts.

Documentation Writing protocols for most alarm scenarios.

On-Prem Legacy Software · 75:1 False-Positive Ratio · No Pattern Recognition

Page 5

ARC Market: Fragmented but Standardized Operations Serve ~37mn Customers Across Europe.

Market Overview

ca. 750 ARCs

4500 European ARCs creating 22bn EUR recurring revenue per year.

DACH

High fragmentation & local operating businesses

Penetration rate between 7–9%, +4.4% CAGR

South Europe & UK

Mid fragmentation & big operating companies with joint ventures

Penetration rate between 15–35%, +6.5% CAGR

Nordics

Only a few big operating companies owning the whole market

Penetration rate between 20–40%, +3.6% CAGR

Page 6

ARC Financials: Excessive Overhead (Mainly Personnel Costs) and Regulation Lead to Low Profitability.

Asset Overview

Personnel Cost

Direct Personnel Costs

Shift Premiums

Training Costs

Regulatory Overhead

VdS 3138

DIN EN 50518

Insurance

Monitoring Overhead

Power & HVAC

Maintenance

Software

Telecom

10–25% Gross Margin

Page 7

Europe Needs One Tech-First Security Player.

Not Thousands of Analog & Mediocre Ones.

Page 8

From Data Gathering to Copilot to Autopilot: Data In, Costs Down

Tech Roadmap

Harvest → OperatorOS → AI Copilot → Autopilot ARC → Flywheel

Product Capability

Data Lake & APIs – All alarms/video in one place

Real-time Console – Unified desktop, dashboard, incident forms, audit replay

Nuisance Filter & Assist – Suggested actions, dismiss with one-line “why”

Autopilot ARC – Standard cases close end-to-end, human veto always

ARC-in-a-Box – Onboard in 2 weeks; model improves across sites

Architecture

Edge GW + Kafka + Iceberg

ONVIF/MQTT adapters, TLS 1.3 VPN, MSK/Confluent

React PWA, WebRTC, Telemetry, RBAC, Audit

CLIP-style embeddings, FAISS vector search, RAG with panel data

Temporal Orchestrator, RLHF Foundation Model

Timeframe Q4 ’25 · Q1 ’26 · Q2 ’26 · Q3 ’26 · EOY ’26

Relative Personnel Cost Change 0% → –20% → –50% → –80% → Ongoing
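The nuisance-filter and copilot stages above rest on embedding similarity: a new alarm event is compared against embeddings of past events to suggest an action. As a minimal, hypothetical sketch of that lookup (plain cosine similarity over toy 3-dimensional vectors; the names, labels, and vectors are illustrative assumptions, and a production system would use CLIP-style embeddings with FAISS as listed in the architecture):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_label(query, labelled):
    """Return the label of the most similar historical alarm event."""
    return max(labelled, key=lambda item: cosine(query, item[1]))[0]

# Hypothetical 3-d embeddings of past alarm events.
history = [
    ("false_positive", [0.9, 0.1, 0.0]),
    ("burglary",       [0.1, 0.9, 0.2]),
]
print(nearest_label([0.85, 0.15, 0.05], history))  # false_positive
```

In practice the returned label would only feed a suggested action with a one-line "why", never an automatic dismissal, matching the human-veto principle above.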

Page 9

We Will Acquire a Platform That Represents a Relatively Cheap, High-Yield and Scalable Asset in a Roll-Up-Ready Market.

Platform Rationale

Target

Privately-owned companies, 0.5mn–10.0mn revenue

12–20+ years average customer lifetime, with 9,500 EUR avg. LTV

Low profitability due to immense overhead, regulation and personnel costs (5–15% adj. EBITDA)

Price-insensitive, locked-in & diversified customer base, constantly growing (C2C, B2B, B2G)

Market

Potential for non-tech efficiencies through price & operations improvements

Fragmented long-tail with great target quantity & size

Major succession pressure within the upcoming years, as the average owner age is 57

Industry Experts 4–6x EV/EBITDA for targets with 500k–2m in EBITDA, plus other uses

Page 10

We Will Add Bolt-Ons & Tuck-Ins to Our Tech-Enabled arcos Platform, Significantly Lifting Margins.

Platform Rationale

Personnel Cost · Overhead · Adjacent Services · Gross Profit

Two-year period

Acquiring 6 target ARCs, each with an avg. portfolio of 6,500 customers at avg. 50 EUR monthly revenue and a 14-year customer lifetime.

1 Arcos Operations Center

Page 11

By Constantly Combining Financial & Technical Engineering, We Will Build a Tech-First Security Group.

Solution

Acquire Platform to Centralize Operations

One physical location as foundation for scalable growth

Centralize accreditation and audits.

Bolt-on Acquisitions & Organic Growth

Increasing Care Ratio & Fully Automated Side Processes

Building the Technical Backbone

Arcos proprietary data infrastructure.

Replace repetitive manual labour with tech-workflows to automate verification, routing, and documentation.

Proof of Concept within 6 Months · Monitoring Market Leader DACH (2–5 Years)

Page 12

Arcos Will Become a High-Margin Unicorn in 2029 After Acquiring 25 of >4500 European ARCs.

Opportunity

Arcos Technologies Group – 2029

160m Rev

Tech & Consolidation Uplift results in ~60% EBITDA Margin

96m EBITDA

At 13x EBITDA (Reference: Verisure)

1.2bn EV

Standalone ARC

3–10m Rev (6.5m avg.)

15% Adj. EBITDA

4–6x EV / Adj. EBITDA

Organic Opportunities: Installer Cooperations · Strategic Partnerships · Margin-Subsidized Organic Sales · Retrofitting

Page 13

We Are Ready to Start.

Page 14

In-Depth Screening of the Entire German Market Reveals a Wealth of Actionable Opportunities.

Targets DE

Amount – Description

2132 – All companies that operate according to the industry classification WZ 80.1 (Security & Guard Services) or 80.2 (Surveillance Systems & Emergency Response Centers).

1675 – Companies that mention services such as alarm connection, alarm tracking, NSL, AES, 24/7 control center, etc., in their public presence.

980 (448 + 532) – For 980 companies, web and market analysis could rule out that they handle NSL services through third-party providers, indicating a high likelihood of their own infrastructure.

338 – Companies with less than €10m in revenue, an EBIT margin below 20%, private ownership (GmbH & eK) and NSL services as their core business are considered particularly attractive and realistic targets.

75 – We see approximately 75 as initial roadshow targets, based on near-perfect target metrics, a personal approach, on-site visits and a suspected fit.

Page 15

We Follow a Clear Plan for Delivering Proof Points for Our Platform Acquisition and Beyond.

Roadmap

Angel Round Secure Backing of Strategic Angel Investors

Roadshow Collect LOIs & Negotiate Exclusivity Clauses

Platform Fundraise & Acquisition Raising Necessary Equity and Closing the Acquisition

Proof of Concept Deploying our Tech to Deliver a Quantitative Case Study on Operational KPIs

Growth Raise Raising the Equity for Our First Big Acquisition Push

Page 16

Our Team has Complementary Backgrounds & Hands-on Experience to Create the Modern ARC Powerhouse.

Team

Louis Wübben – Co-Founder (Growth, Partnerships)

Worked on Enpal’s $1B debt tranche

Supported 10+ early-stage firms in fundraising & growth

+1y Public Sector experience

Linus Wetzig – Co-Founder (Integration, Ops)

Built Lights-Out Factories with Ex-Opal Robotics Team

Structured fundraising from Private Wealth & Institutions

Launched 70+ Ghost Kitchens in three weeks & built analytics

Moritz Steigerwald – Co-Founder (Software, Data, AI)

Built Large-Scale Data Pipelines for 4+ Years

2y+ Experience in LLM Agent Development & Model-Training

Valedictorian in BSc & MSc

Page 17

We’re Raising a 300k Convertible Note for All Pre-Acquisition Efforts.

Venture Round (Tech & Acquisition) Will Follow Within 6 Months.

Page 18

Our Resilience is Under Pressure. It’s Time to Act.

Louis Wübben, Co-Founder louis@arcos-global.com

+49 160 93976004

Linus Wetzig, Co-Founder linus@arcos-global.com

+49 176 27914511

Moritz Steigerwald, Co-Founder moritz@arcos-global.com

+49 176 24318099

arcos-global.com

ICP Classification Prompt

documents/icp-classification-prompt.txt


**Role:**
You are a **Data Operations Architect** and **Pre-Seed GTM Scout**. You specialize in finding "Operational Friction" in product data supply chains.

**The Context:**
We are a deep-tech startup building an **AI Ingestion Agent** that sits on top of PIM systems (Akeneo, Pimcore, Salsify).
*   **The Problem:** Companies are drowning in unstructured product data (PDFs, Excel sheets, Supplier feeds).
*   **The Solution:** Our AI automates the ingestion process.
*   **The Targets:** We look for **ANY** manual friction:
    1.  **The Intern Drudgery (Volume Play):** Massive amounts of simple copy-pasting that takes hours of low-skilled labor.
    2.  **The Expert Bottleneck (Complexity Play):** Data that requires specific knowledge (medical/legal) to interpret.

**Your Task:**
Analyze the provided list of companies and classify them based on their data operations profile. Return the result in strict JSON.

**Classification Rules & Metrics:**

1.  **company_type:** Choose ONE.
    *   **Manufacturer:** Produces goods (Syndication pain).
    *   **Retailer:** Sells 3rd party goods (Onboarding supplier data pain).
    *   **Brand:** Owns IP, outsources manufacturing.
    *   **Wholesaler:** Middleman moving massive volume.
    *   **Holding:** Owns multiple brands (Consolidation pain).
    *   **Service Provider:** Sells time/access (NO PIM NEED).

2.  **business_model:** Choose the *Optimal* fit.
    *   **B2B:** Sells to businesses (Technical data sheets, Excel feeds).
    *   **D2C:** Sells to consumers via own site (Marketing speed).
    *   **Hybrid (B2B & D2C):** Sells everywhere (Maximum Pain).
    *   **B2B2C:** Direct sales / Multi-level marketing.
    *   **Service:** Non-physical.

3.  **pim_probability (1-10):**
    *   **1:** Service Business (Gym, Agency). No physical inventory.
    *   **10:** Global Manufacturer or Omnichannel Retailer. PIM is critical infrastructure.

4.  **product_complexity (1-10):** *Attribute Density.*
    *   **1:** Simple (T-Shirt: Size, Color).
    *   **10:** Complex (Industrial Machinery: 100+ technical attributes).

5.  **domain_knowledge_requirement (1-10):** *The "Skill Level" needed for entry.*
    *   **1 (Intern/Admin):** Pure data entry. "Monkey work." (Target for Volume automation).
    *   **10 (Expert):** Requires Engineer/Chemist/Lawyer to validate data. (Target for Reasoning automation).

6.  **data_sensitivity (1-10):** *The "Risk Score."*
    *   **1 (Low):** Wrong data = Return/Refund.
    *   **10 (High):** Wrong data = Lawsuit/Fine/Ban (Health claims, Safety specs).

7.  **product_number:**
    *   Estimate the **active SKU count** as an **Integer**.

8.  **estimated_revenue:**
    *   Use ONLY these buckets: [Below 20M, 20-50M, 50-300M, 300-500M, 500-800M, 1B-2B, 2B+].

9.  **domain:**
    *   The company's website domain (e.g., "siemens.com", "bosch.de").

10. **industry:** Choose ONE.
    *   Food, Cosmetics, Sports, Furniture, Fashion, Plastics, Machinery, Medical, Electronics, Petcare, Tobacco, Office, Other.

11. **reasoning:**
    *   Hard-hitting analysis of the friction point. **Max 6 words.**

**Output Format:**
Return ONLY a valid JSON array.

**Example:**
```json
[
  {
    "company_name": "KRAIBURG TPE",
    "company_type": "Manufacturer",
    "business_model": "B2B",
    "pim_probability": 9,
    "product_complexity": 8,
    "domain_knowledge_requirement": 7,
    "data_sensitivity": 6,
    "product_number": 5000,
    "estimated_revenue": "300-500M",
    "domain": "kraiburg-tpe.com",
    "industry": "Plastics",
    "reasoning": "Technical polymer specs across industries"
  }
]
```
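Because the model must "Return ONLY a valid JSON array", the result can be checked mechanically before it enters any downstream pipeline. A minimal validation sketch in Python (the function name and exact checks are illustrative assumptions, not part of the prompt; enum values and ranges are taken from the rules above):

```python
import json

ALLOWED_TYPES = {"Manufacturer", "Retailer", "Brand",
                 "Wholesaler", "Holding", "Service Provider"}
ALLOWED_REVENUE = {"Below 20M", "20-50M", "50-300M", "300-500M",
                   "500-800M", "1B-2B", "2B+"}

def validate_row(row):
    """Check one classification record against the rules; return a list of errors."""
    errors = []
    if row.get("company_type") not in ALLOWED_TYPES:
        errors.append("company_type not in allowed set")
    if row.get("estimated_revenue") not in ALLOWED_REVENUE:
        errors.append("estimated_revenue not a valid bucket")
    # All four scores must be integers in 1-10.
    for field in ("pim_probability", "product_complexity",
                  "domain_knowledge_requirement", "data_sensitivity"):
        value = row.get(field)
        if not isinstance(value, int) or not 1 <= value <= 10:
            errors.append(f"{field} must be an integer in 1-10")
    # Reasoning is capped at 6 words by the prompt.
    if len(str(row.get("reasoning", "")).split()) > 6:
        errors.append("reasoning exceeds 6 words")
    return errors

raw = ('[{"company_type": "Manufacturer", "estimated_revenue": "300-500M", '
       '"pim_probability": 9, "product_complexity": 8, '
       '"domain_knowledge_requirement": 7, "data_sensitivity": 6, '
       '"reasoning": "Technical polymer specs across industries"}]')
rows = json.loads(raw)
print([validate_row(r) for r in rows])  # [[]] -> no errors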

**Input Data:**
____________________

The Hook: Death of the Funnel

documents/the-hook-death-of-funnel.md


THE HOOK:

1. The Fundamental Shift: The Death of the Funnel & The "Shortlist" Era

We are witnessing the most aggressive compression of the buying journey since the invention of the search engine. The traditional "E-Commerce Funnel" is collapsing.

The Old Mechanism (Browsing): The current workflow is inefficient. A user searches (Google), browses (opens 10 tabs), compares (manually filters criteria like price/specs), and selects.

The New Mechanism (Retrieving): AI Agents compress these steps into milliseconds. A user prompts ChatGPT or Gemini: "Find me the best ergonomic chair under €500 with a mesh back that ships to Berlin by Tuesday."

The Shift: The AI does not browse; it retrieves. It generates an immediate Shortlist based on semantic matching. The user does not click links to explore; they execute a transaction based on the agent's recommendation.

The Consequence: The "Job to be Done" gets shorter. If a product's data is not understandable by the agent's logic (vector embeddings), the product effectively ceases to exist. It is not just ranked lower; it is invisible to the machine controlling the wallet.

2. The Infrastructure Crisis: The "Product Data Cable" Collapse

While the interface of commerce is changing, the backend infrastructure is fundamentally broken. The current flow of data—the "Product Data Cable"—was designed for human readability, not machine inference.

The Fractured Flow: Data moves from ERP (Source of Truth) → PIM (Enrichment) → Channel Management → Marketplaces.

The Semantic Gap: This infrastructure relies on keywords and static attributes. It fails to capture Problem-Solution Mapping.

Example: "High Load Bearing" is just a SKU attribute in an ERP. To an AI Agent, it needs to be mapped as a "Solution for high-friction gravel pit excavation." Current PIMs cannot store or transmit this contextual nuance.

The Complexity Reality: Companies are not monoliths; they are collections of distinct data flows with different weights.

Cosmetics Example: A sunscreen requires regulatory compliance data (heavy text), while a lipstick requires visual hex codes (image data). The current "Cable" tries to force these distinct strands into a single, rigid pipe, causing massive data degradation before it even reaches the AI.

The Manual Bottleneck: Despite digital tools, the industry relies on a "70% Manual Bottleneck." The actual filling of the cable is a nightmare of Excel sheets and manual copy-pasting. This introduces "silent errors" (typos, missing context) that cause AI models to hallucinate or reject the product entirely.

3. The New Economic Architecture: "The Brain" and "The Body"

The market is bifurcating into two distinct roles, changing how value is captured.

The Interface (The Brain): Google (Gemini) and OpenAI (ChatGPT) will dominate the customer relationship.

Prediction: They will not become retailers or logistics providers.

Monetization: They will monetize via embedded affiliation. They will act as the ultimate sales clerk, taking a cut of the transaction for being the "Agent" that closed the sale.

The New "Ads": Paid advertising will shift from "Blue Links" to Ranking Improvements inside the chat response.

The Fulfillment (The Body): Brands and logistics providers will handle the physical movement. However, to get the "Brain" to choose them, they must feed it perfect, semantically rich data.

4. The Velocity Mismatch: Why Incumbents Will Fail

There is a structural reason why existing PIM vendors and Channel Management services cannot solve this.

Legacy Pace: Enterprise software operates on 18-month roadmap cycles.

AI Pace: The AI landscape changes weekly (context windows, embedding models, retrieval logic).

The Crash: PIMs are building "faster horses" (better interface for humans) while the market demands "engines" (automated data feeding for agents). The incumbents simply cannot move fast enough to adapt to the infrastructure requirements of Gemini or OpenAI.

5. The Rise of "GEO" (Generative Engine Optimization) & Capital Validation

A new industry is forming to replace SEO, and the smartest capital in the world is betting on it.

SEO is Dead: Optimizing keywords for a blue link is becoming irrelevant.

GEO is Born: Optimizing data structures (Knowledge Graphs) so an AI validates a product as the correct answer to a complex query.

The "Funding Insanity": The venture capital market has validated this shift with aggressive bets on the application layer.

The Global Signal: It is not just a local phenomenon. In Berlin, Peec.ai (Peak AI) raised a $21M Series A just two years after inception.

The Smart Money: In NYC, Profound AI just raised $35 Million backed by Sequoia.

The Implication: When Sequoia and top-tier VCs pour $50M+ into GEO tools within months, the market is screaming that this is the next trillion-dollar frontier. The "Shortlist" era is not a theory; it is being capitalized right now.

The Market Split:

Agencies: See this as a survival mechanism and are aggressively adopting GEO tools to sell to clients.

Enterprises: Currently view this as "Play Money," failing to realize their entire distribution model is about to be gated by AI agents.


THE PROBLEM:

In the modern manufacturing landscape, the journey of a product from conception to the consumer's screen is not a simple line; it is a complex, multi-strand "Product Data Cable." While the infrastructure of this cable is built on enterprise software, the reality of its operation is a fractured mosaic of manual interventions, domain-specific bottlenecks, and systemic inefficiencies. To understand why a product takes months to reach a marketplace or why AI search engines fail to surface the right results, we must dissect the internal mechanics of this data flow.

I. The Infrastructure: The Core Systems (The Cable Jacket)

The "Cable" represents the overarching technical infrastructure. It provides the path, but not necessarily the speed or accuracy. It consists of four primary stages:

ERP (Enterprise Resource Planning): The foundational layer where the "ID" of the product is born (SKU, basic logistics, and raw cost).

PIM (Product Information Management): The enrichment zone where technical specifications are wedded to marketing narratives.

Channel Management: The distribution hub where data is formatted for specific endpoints.

Marketplaces & AI Commerce: The final consumers. Here, AI search engines and marketplaces (Amazon, Zalando, etc.) act as the ultimate judges of data quality.

II. The Strands: Category-Specific Data Flows

A company is not a monolith; it is a collection of Business Units. In a Cosmetics company, for example, the data cable carries three distinct flows:

Flow A: Skincare (Sunscreens/Moisturizers): High complexity in "active ingredient" attributes and regulatory compliance.

Flow B: Color Cosmetics (Lipsticks/Palettes): High complexity in visual attributes, hex codes, and "finish" types (matte vs. gloss).

Flow C: Fragrances: High complexity in "scent notes" (top, heart, base) and emotional storytelling.

Because these categories have fundamentally different attributes, the "Cable" must handle different data weights and speeds simultaneously. A lipstick requires a color swatch (image/hex); a sunscreen requires a legal SPF disclaimer. One size never fits all.

III. The Human Architecture: Departments and Ownership

The data flow is governed by four distinct departments, each adding a layer of complexity to the cable:

Development: The originators. They deal with raw supplier data, chemical compositions, and technical feasibility.

Product Management: The architects. They define the product family and the "logic" of the attributes.

Marketing: The storytellers. They transform technical specs into consumer benefits, dealing with rich text and emotional resonance.

Sales: The executors. They focus on channel-specific requirements, pricing, and "buy-box" readiness.

IV. The Friction: The 70% Manual Bottleneck

While the "Cable" looks digital on an architectural map, the actual filling of that cable is a manual nightmare. We can break down the data ingestion into four distinct, problematic segments:

The 30% Systemic Synchronization (The Automated Core): Only 30% of the data moves automatically from the previous system or external tools (like Pricing Tools or PLM). This is "clean" data—SKUs, basic dimensions, and weight. However, as we move further from the source (ERP) toward the consumer (AI Commerce), the ability to sync decreases. The more "creative" the data becomes, the less it can be automated by traditional syncs.

The 40% Mass Ingestion (The Excel Labyrinth): The largest chunk of data entry (40%) happens via mass ingestion through sheets. This is categorized as "Complicated, Error-Prone, and Time-Consuming."

The Problem: Huge Excel files are exported, modified, and re-imported.

The Effect: One wrong cell shift in a sheet of 5,000 lipsticks can lead to a mass-recall of digital data. This process is the primary cause of product launch delays.

The 15% Domain-Specific Filling & Transformation (The Expert Bottleneck): This 15% is "Highly Manual and Highly Repetitive." It requires a human with domain expertise (e.g., a Chemist or a Product Manager) to look at a technical specification and "translate" it into a system-readable attribute.

The Problem: This involves interpreting PDFs or engineering documents. It cannot be automated because it requires understanding context.

The Effect: This creates a massive internal slowdown because the "Expert" becomes a data-entry clerk rather than a creator.

The 15% Manual Copy-and-Paste (The "Last Mile" Grunt Work): The final 15% is the most primitive: manual copy-and-paste from emails, miscellaneous files, or legacy PDF documents.

The Problem: Human fingers moving data from a Word doc to a PIM field.

The Effect: This is where "silent errors" creep in—typos in ingredients or mislabeled shades—that AI commerce engines later penalize.

V. The Data Anatomy: Attributes and Complexity

Inside these flows, the data itself varies in "shape," creating further drag on the cable:

Attribute Types: We deal with Booleans (Yes/No), Lists (Dropdowns), and Free Text (Descriptions).

Complexity & Depth: A single skincare product might have 150 attributes. Some are short (Boolean: Vegan? Yes), while others are long and sensitive (List of Ingredients/Allergens). The "length" and "sensitivity" of these attributes mean that a mistake in a "Free Text" field (like a health claim) carries significantly more legal risk than a mistake in a "Weight" field.


VI. The Pre-ERP Genesis: Supplier and Development Data

We must acknowledge that the cable starts before the ERP. The initial "raw data" is often fed by Suppliers. This data is usually unformatted and chaotic. When the Development department creates a new product, they are working in a vacuum, often using their own tools or offline files. By the time this reaches the ERP, it has already been filtered through manual sheets, meaning the "Source of Truth" is corrupted before it even enters the formal system.

VII. The Conclusion: The Cumulative Effect

The result of this "Multi-Flow, Single Cable" reality is a system of High Friction.

Data Reuse vs. Transformation: At every step, some data is reused (SKU), some is transformed (Technical Spec -> Marketing Bullet), and some is entirely new (Marketplace-specific keywords).

The Slowdown: Because 70% of the work is manual (Sheets + Transformations + Copy-Paste), the "Product Data Cable" acts as a funnel. Even if the ERP and PIM are "fast" systems, the human requirement to fill the 150+ attributes per product per category creates a bottleneck that prevents companies from competing in the age of AI Commerce.

In this environment, "AI Commerce" (Search Engines, LLM-based shopping assistants) is the ultimate consumer. If the 70% manual process yields even a 5% error rate, the AI will miscategorize the product, leading to zero visibility and lost revenue. The cable is only as strong as its most manual strand.

THE SOLUTION:

I. The Painkiller: Solving PIM Ingestion

The Core Interface & Workflow

The Approach: We address a huge opportunity by using AI to interpret unstructured data through an interface designed to look like a chat agent or Lovable.dev.

The Experience: This creates a dedicated space where a user can simply drop information to be interpreted, normalized, and mapped against an underlying structure.

Target Systems:

General Targets: Spreadsheets or ERPs.

Primary Focus (Versions 1 & 2): PIM (Product Information Management) systems.

Value Proposition: The translation of unstructured data into structured data.

Technical Infrastructure:

Operates on servers located in Europe or Germany to ensure sovereignty.

Utilizes a database and AI models like Azure OpenAI, Gemini, or self-hosted unstructured.io for document interpretation.

The "Bucket" Workflow:

Sheet information enters a "bucket."

The document is interpreted, and information is extracted as JSON.

This raw data is provided to an AI holding specific instructions about the company, the goal, and the mapping structure beneath the PIM.

The "Strict Directive" Optimization

Critical Optimization: The core logic defines how the AI reads and formulates extracted information.

The Directive: The system structures data and presents it for review, operating under a strict rule: It is instructed to fill out what it can, and explicitly instructed not to guess.

Goal: This is a complex optimization designed to minimize false positives, ensuring the AI knows what it is doing—something incredibly hard to achieve without deep experience.

Validation: We utilize validation via Zod.

Confidence Ranking: The system ranks the filling of structured fields based on confidence levels.
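The strict directive and the confidence ranking can be sketched together. The production system described here validates with Zod in TypeScript; the following is a simplified, language-neutral Python illustration of the "fill out what it can, never guess" rule, where the field names, shapes, and the 0.8 threshold are all assumptions:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; below it the field is not filled

def apply_strict_directive(extracted, schema_fields, threshold=CONFIDENCE_THRESHOLD):
    """Keep only fields the model is confident about; never guess the rest.

    `extracted` maps field -> (value, confidence). Fields absent from the
    extraction stay empty; low-confidence fields are routed to human review
    instead of being written.
    """
    accepted, needs_review = {}, {}
    for field in schema_fields:
        if field not in extracted:
            continue  # fill out what it can -- missing stays missing
        value, confidence = extracted[field]
        if confidence >= threshold:
            accepted[field] = value
        else:
            needs_review[field] = (value, confidence)
    return accepted, needs_review

model_output = {
    "spf": ("50+", 0.97),
    "vegan": (True, 0.55),        # low confidence: do not guess
    "shade_hex": ("#C24A5B", 0.91),
}
accepted, review = apply_strict_directive(
    model_output, ["spf", "vegan", "shade_hex", "allergens"])
print(accepted)  # {'spf': '50+', 'shade_hex': '#C24A5B'}
print(review)    # {'vegan': (True, 0.55)}
```

The point of the split return value is exactly the false-positive minimization described above: confident fields flow onward, uncertain ones surface in the review form rather than silently entering the PIM.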

Solving Governance & The "Filter Form"

Review Process: The viewer is presented with a "filter form" on their unstructured data to accept, or they can route it to someone else for approval.

The "Governance" Problem: This solves the inherent issue in PIMs today.

Currently, a PIM functions like a government: everyone consumes the services and must contribute, but nobody feels responsible for it except the PIM team.

PIM teams are integration specialists rather than content experts.

Product managers and marketers do not view data entry as their job and actively avoid it.

The Solution: We make it easy for experts to review by eliminating copy-pasting and handling domain-specific transformations.

User Command: A user can simply write, "Use my unstructured data from this PDF and this Access sheet," and the agent handles the zero-effort ingestion into the required product fields.

The Intermediate Draft State (Safety Layer)

Synchronization: Crucially, we maintain a layer that is always synchronized to the state below.

"Drafts" Storage: We store the AI's suggestions in an intermediate AI layer (utilizing our database backend) as "Drafts."

The Safety Buffer: We do not write to the PIM until the "Accept" button is hit. The UI displays the AI's suggestions before they touch the record of truth.

Conflict Prevention: Before writing, the system checks whether the underlying system changed while the draft awaited acceptance, to prevent data conflicts.
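This conflict check amounts to optimistic concurrency control. A hedged sketch, assuming the PIM record carries a simple version counter (real PIM APIs expose change detection differently, e.g. via timestamps or ETags):

```python
class StaleRecordError(Exception):
    """Raised when the PIM record changed after the draft was created."""

def commit_draft(pim_record, draft):
    """Write an accepted draft to the PIM only if the record is unchanged.

    The draft remembers the record version it was based on; any mismatch
    means someone edited the record while the draft awaited approval.
    """
    if pim_record["version"] != draft["base_version"]:
        raise StaleRecordError("PIM record changed while draft awaited approval")
    pim_record.update(draft["fields"])
    pim_record["version"] += 1  # bump so later drafts see the change
    return pim_record

record = {"sku": "LIP-001", "shade": None, "version": 3}
draft = {"base_version": 3, "fields": {"shade": "Matte Red"}}
commit_draft(record, draft)
print(record["shade"], record["version"])  # Matte Red 4
```

Nothing touches the record of truth until "Accept" is hit, and a stale draft fails loudly instead of overwriting a colleague's edit.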

II. The Vision: The AI Sales Engine

From Painkiller to Channel Management

The Shift: This moves us from the painkiller phase to the follow-up vision where we are delivering and writing back.

The Role: We become the AI agent that sells to AI search engines.

Data Flow:

We consume data from the PIM.

We send everything that was ingested via us to ChatGPT and Gemini.

Auto-Mapping: We already know how this graph will look and how the data has to look to be sent there. With minimal configuration, the agent should map this by itself.

Channel Management: We become a system integrating product data and potentially other information (like pricing often found in the PIM) and sending this data to ChatGPT and Gemini.

Optimization & "Ultrarelevant" Integration

Optimization Capability: We provide the ability to optimize this product data, targeting sales, SEO, and performance marketing professionals.

Data Pairing: We optimize the wording of fields because we pair it with data currently coming from Ultrarelevant.com.

Ranking Improvement: We can see how to improve ranking by changing the wording, integrating data on the action prompt responses within the system to optimize for more sales.

SEO Logic: Search engines are crawled for the specific topic/product we are selling to identify certain prompts on how the product could be found in ChatGPT. The system improves the product data we send to ChatGPT based on this.

Marketing Campaigns & Monitoring

Campaigns: We can run offers, discounts, and marketing campaigns started with this channel management engine.

Execution: Giving discounts via ChatGPT and Gemini just like we do on Google right now, anticipating this economy will start to explode.

AI Monitor: The optimization part is mostly done via the AI agent acting as a monitor.

User Interaction: It simply asks:

"Do you want to change this wording from this field to improve the ranking of the products inside the AI search engine?"

"Do you want to make this discount to sell more?"

Closing the Loop: Transaction & Write-Back

Transaction Integration: When a customer buys in an AI chat engine, GPT handles the payment via the company's Stripe integration.

Data Bridge: This information is sent to the channel management service (Ultrarelevant).

Write-Back: The AI sales agent brings this further, writing the order back to the company's ERP with almost no configuration (we just have to attach the ERP).

Revenue Model: It takes a cut on sales like Tradebyte would do.

Current Status: While it is still unknown how this writing to the ERP will work in practice, the overall vision is clear.

Ultimate Identity: We are fixing the product data gap and becoming the AI sales engine for the company—basically a salesperson in the company that owns the AI commerce channel.

III. How We Build It: Roadmap & Methodology

The 5-Dimension Development Approach
Let's think about how we approach building the product. We have a five-step approach for building out and expanding the product's feature set:

Dimension 1 (Accuracy):

Optimize the accuracy of the agent interpreting unstructured data or just reading it out and pasting it into a structured field/mapping (e.g., an Excel spreadsheet).

First step: Excel spreadsheet to Excel spreadsheet. Basically, copy and paste.

Dimension 2 (Transformation):

Introduce minor transformations.

Input: PDF or extra spreadsheet data is interpreted.

Process: The agent has certain instructions, transforms the data into the spreadsheet, and sends it out for review.

Focus: Domain-specific changes according to the solution (interesting for go-to-market).

Dimension 3 (Inputs & Prompting):

Start with existing spreadsheets and add more data points (PDF, then Image, and Text/Email).

Focus on pure interpretation with agent instructions.

Optimization: Refining additional instructions given inside the chat interface or prompt field.

Dimension 4 (PIM Integration - Version 4 Mechanics):

Move from a spreadsheet to a PIM system.

Focus: Integrating/consuming information from the PIM system on the current graph.

Mechanism: Connect via API (e.g., Akeneo or Pimcore API). The agent replaces the "bottom spreadsheet" from earlier versions with a direct view into the PIM's schema. It treats the PIM schema as the "target sheet," allowing it to map directly to the live system structure.

Dimension 5 (Graph Navigation - Version 5 Mechanics):

Focus: Moving away from opening a single product to insert information or giving field-specific instructions.

Mechanism: The agent gains autonomy. Instead of us selecting the product, the agent navigates the graph (traversing from Product Family -> Variant -> Specific Attribute) to find empty spots that match the unstructured data in the "Bucket."
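A toy version of the Dimension 5 traversal, assuming a nested family → variant → attribute structure (the graph shape and data are hypothetical):

```python
# Sketch of Dimension 5: traverse the product graph and collect the
# empty spots the agent should try to fill from the "Bucket".

PRODUCT_GRAPH = {
    "LippenstiftFamilie": {
        "variant-rot": {"sku": "L-01", "inci": "", "description_de": ""},
        "variant-nude": {"sku": "L-02", "inci": "Cera Alba", "description_de": ""},
    },
}

def find_empty_spots(graph: dict) -> list[tuple[str, str, str]]:
    """Return (family, variant, attribute) triples where the value is empty."""
    spots = []
    for family, variants in graph.items():
        for variant, attributes in variants.items():
            for attribute, value in attributes.items():
                if value == "":
                    spots.append((family, variant, attribute))
    return spots

for spot in find_empty_spots(PRODUCT_GRAPH):
    print(spot)  # first: ('LippenstiftFamilie', 'variant-rot', 'inci')
```

The real agent would then match each empty spot against the unstructured data in the bucket instead of a human pre-selecting the product.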

Execution & Methodology

POC Build: With the solution for data ingestion defined, we execute the build, starting with a POC that converts unstructured data into structured data.

POC Architecture: A "two sheets" architecture: the top sheet feeds the interpreter, which passes the data to the agent, which uses its instructions to fill out a structured sheet below.

Expansion: In the second form, we expand to interpret more inputs like product data from PDFs.

Development Strategy:

This will be developed with AI coding agents step by step.

When we reach PMF (Product-Market Fit), we will rebuild this with a late-stage co-founder or CTO owning and maintaining the stack.

Timeline:

There should be an AI-generated version in one week.

A reliable production version in three to four weeks.

IV. Risks & Technical Feasibility

Domain Agnostic & Speed (Mitigation Strategy)

Domain-Agnostic Operation: We operate domain-agnostically, meaning we always use the newest chat agent, the newest coding agent, and the newest interpretation model inside the agent.

Model Flexibility: We use models from OpenAI, Azure OpenAI, Google (Gemini), and Anthropic.

Quality & Speed: This ensures we always deliver the best available quality at maximum speed.

Security & Compliance

PII Risk: We address security and compliance risk by explicitly keeping personally identifiable information (PII) out of the system.

Data Processing: Because we store intermediate drafts in our AI layer to power the approval pages, we ensure no sensitive data is processed in a way that violates regulations.

Sovereignty: We host in Europe/Germany to ensure sovereignty.

API Availability

Risk: Whether PIM systems expose the required information via their APIs.

Feasibility: The answer is yes. The market leaders, Pimcore and Akeneo, give us sufficient APIs for this, confirming that the integration is technically feasible.

AI Capabilities & Graph Navigation

Risks:

The risk of the AI not being able to navigate through the product graph.

The risk of the AI not being able to transform the information correctly.

Mitigation: This is why our development roadmap is structured in dimensions, starting with direct mapping instructions before moving to autonomous graph navigation, allowing us to validate the AI's capability at each step of complexity.



VIII. Go-to-Market Strategy

Our Go-to-Market strategy is executed on three distinct levels of granularity: the strategic selection of the Market, the precise definition of the target Account, and the tactical engagement of the Lead.

I. Market Level: Strategic Selection & Mapping

We did not select our target markets randomly. We mapped and ranked 40 distinct industries across B2B and B2C segments to identify the specific intersection where the "Product Data Cable" is broken and the threat of AI Commerce is imminent.

1. The Ranking Dimensions

We scored these industries based on five specific criteria to calculate their attractiveness:

Pain Magnitude (The Trionomy): We assessed the intensity of Attribute Density (number of fields), Transformation Complexity (difficulty of writing), and Launch Velocity (frequency of updates).

Implementation Friction (Regulation): We target the "Sweet Spot"—complex enough to be painful, but not highly regulated (like Pharma) where sales cycles freeze.

Deal Size: B2B for high ACV (Annual Contract Value) and stickiness; B2C for speed and feedback loops.

AI Commerce Relevance: The likelihood that consumers will use AI Agents (LLMs) to buy these products.

2. The Target Industries (The Top 5)

We prioritized five industries based on the specific "Job to be Done" for the AI Agent:

Cosmetics (B2C): Driver: Personalization. High density (Ingredients) + High velocity (Seasonal Collections).

Consumer Electronics (B2C): Driver: Technical Comparison. High specs + High complexity.

Sports Goods (B2C): Driver: Lifestyle Fit & Comparison.

Manufacturing/Bearings (B2B): Driver: Education. The gap between user knowledge and product complexity is high. Buyers use AI to find technical fits in massive catalogs. High Deal Size.

Furniture (B2C/B2B): Driver: Visualization & Comparison. High return rates due to bad data make this a high-pain vertical.

3. The Product Entry Wedge

To lower the barrier to entry, we structure our offering by risk:

The Entry (Low Risk): We automate the 15% "Last Mile" work (Copy-Paste from PDF/Email). No domain knowledge required. This builds trust.

The Expansion (High Risk): We move to the 15% "Expert Bottleneck" (Transformations). We manage this by ranking risk: Context Gap (Tech to Marketing), Language Gap, and Liability Gap (Legal Claims).

Industry Segment Pain Speed Deal Size AI Commerce GTM Score
Cosmetics B2C 89 95 60 65 81.7
Consumer Electronics Hybrid 89 78 78 82.5 80.7
Supplements B2C 83 92 58 61.5 78.2
Mattresses B2C 78 82 55 78 75.0
Beauty Devices B2C 85 80 60 75 74.0
Baby Products B2C 82 88 55 75 77.0
Drones Hybrid 80 70 60 80 71.5
Wearable Health Tech B2C 75 78 50 75 70.1
Power Stations (EcoFlow) Hybrid 78 70 65 75 69.8
Photography Gear Hybrid 82 75 50 80 72.2
Fashion B2C 70 85 45 60 69.0
High-Value Sports Gear Hybrid 75 70 55 67.5 68.1
Pet Medical / Supplements B2C 80 78 45 78 68.2
High-End Bikes Hybrid 78 65 65 75 68.2
E-Bikes Hybrid 80 65 70 80 70.5
Bearings B2B 92 40 70 85 68.5
Pumps & Filtration B2B 95 35 80 85 67.8
Scientific Consumables B2B 86 50 55 85 67.7
Fasteners B2B 90 45 60 80 67.5
Industrial Equipment B2B 89 38 82 70 60.5
Construction Components B2B 88 40 70 80 66.1
Luxury Goods B2C 65 85 60 70 66.0
Home Appliances Hybrid 78 65 55 75 66.5
Smart Home / Lighting B2C 70 78 48 72 66.3
Motorcycle Gear Hybrid 76 60 60 70 64.3
Chemical Products / Cleaners B2B 80 60 45 75 63.8
Musical Instruments Hybrid 72 70 40 80 63.4
Pet Products B2C 65 85 40 70 63.8
Camping / Overlanding Gear Hybrid 70 65 50 68 63.1
Ergonomic Furniture B2C 72 70 50 70 64.4
Electrical Installation B2B 88 40 65 82 65.1
Fire Safety Systems B2B 90 35 75 85 65.5
Industrial Adhesives B2B 85 40 55 80 62.3
DIY / Tools Hybrid 70 60 45 65 58.0
B2B Manufacturing B2B 94 25 95 67.5 58.6

II. Account Level: The Ideal Customer Profile (ICP)

Once the market is selected, we drill down into the specific organizations. We focus on the "Mid-Market Growth" segment.

1. Firmographics (The Hard Data)

Revenue Criteria: €30M – €1B.

Why: Companies under €30M lack the data volume to feel the pain; companies over €1B suffer from bureaucratic inertia that slows early adoption.

Growth Signals: We target companies that are currently scaling in revenue or headcount, or that have received recent funding.

Tech Stack: Companies that already have a PIM (Product Information Management) system but are still hiring data entry clerks—a clear sign of process failure.

2. Psychographics (The Soft Data)

The Mindset: These companies are terrified of losing market share to agile competitors on marketplaces (Amazon/Zalando).

The Pain: They are experiencing "Launch Drag"—new products take weeks to get online due to manual data entry.

Target Examples: We are validating with Cosnova (High Velocity B2C), IGUS (Complex B2B), and Franz Mensch (Regulated Hygienic Goods).

As of December 21, 2025:
In total, there are 605 companies in DACH.

Distribution by Industry (Ranking)

Industry Count
Machinery (Maschinenbau) 98
Electronics (Elektronik) 94
Plastics (Kunststoffe) 83
Furniture (Möbel) 65
Food (Lebensmittel) 56
Sports (Sport) 42
Cosmetics (Kosmetik) 39
Medical (Medizin) 23
Other (Sonstiges) 22
Fashion (Mode) 22
Consumer Electronics (Unterhaltungselektronik) 19
Office (Büro) 14
Deco (Deko & Wohnaccessoires) 13
Petcare (Tierbedarf) 9
Automotive (Automobil) 4
Manufacturing (Fertigung) 2
TOTAL 605

III. Lead Level: Personas & Outreach

This is the execution layer: who we talk to, the strategy (warm to cold), and the channels (LinkedIn/Email/Phone) we use. To date, we have sourced 400+ leads from targeted accounts.

1. The Internal Personas (Who)

We distinguish between three internal roles with specific job titles. Crucially, our target shifts based on our stage.

A. The User (The Pain Holder):

Titles: Strategic Marketing Manager, Product Manager, Strategic Product Manager, Category Manager, Catalog Manager.

Role: They do the manual work.

Discovery Phase Strategy: We focus here 90% of the time. We validate the pain. We do not sell ROI; we sell "Relief from Excel."

B. The Buyer (The Budget Holder):

Titles: Head of E-commerce, Head of Digital, Head of Category Management, Head of Product Management.

C-Level: Chief Operating Officer (COO), Chief Digital Officer (CDO).

Role: They own the P&L.

Sales Phase Strategy: Once we have proof, we pivot here. We sell Business Impact: Speed-to-Market, Revenue Protection, and AI Readiness.

C. The Supporter (The Gatekeeper):

Titles: IT Solutions Architect, PIM Manager, Master Data Manager.

Role: They own the infrastructure.

Strategy: Assurance. We must prove we are an additive layer, not a replacement.


2. The Outreach Strategy (Warm to Cold)

We classify our approach by "Relationship Temperature."

Warm Outreach (High Trust):

Sources: WHU Alumni (mapping alumni at target firms), Parental/Personal Networks, and Strategic Angels/Mentors (approaching C-Levels for advice, which converts to pilots).

Leverage: Consultants who are already working inside the account.

The Bridge (Industry Experts):

Sources: Ex-COOs, Retired Veterans, Ex-Consultants.

Tactic: We treat them as "Oracles." We ask for their expert opinion. They validate our thesis and open doors to their former colleagues.

Cold Outreach (Scalable Direct):

Sources: Creating our own lead lists via data providers. This is our engine for finding the scalable growth channel.

3. The Channels

How we physically deliver the message across the temperatures above.

Channel A: LinkedIn

Function: This is our primary testing lab for messaging. We view it as a funnel: First Contact → Response → Discovery Call.

The "Yes" Arguments: We test different hooks to see what works:

Sympathy: "We are young, eager founders."

Ego: "You are the expert; we need your opinion."

Incentive: "We offer Advisor shares or payment for your time."

The "No" Blockers: Sales fatigue, Problem Denial, or simply busy timing (Q4).

Channel B: Email

Function: High-volume, targeted scalability.

Channel C: Phone

Function: High friction, but immediate feedback.

Channel D: Video Call

Function: This is the goal of all previous channels.

Tactics: In the Discovery Phase, this is an interview, not a demo. We ask questions to map their specific "Data Cable." In the Sales Phase, this becomes the demo environment.

IV. Strategy & Funnel Dynamics

Our strategy is distinct across two phases. We currently optimize for Truth, not Revenue.

1. The Strategic Pivot

We distinguish sharply between validated learning (Phase A) and market capture (Phase B).

Phase A: Discovery (Current)

Goal: Validated Learning (Proving User Pain).

Target: The User (Product Managers) & Industry Experts.

KPI: Outreach → Discovery Call Ratio.

Phase B: Sales Scale (Future)

Goal: Revenue & Market Share.

Target: The Buyer (Heads of / C-Level).

KPI: CAC, ACV, and Closed Won deals.

2. The Funnel Logic

Outreach is a scientific process focused on optimizing three conversions:

First Contact: Connection Request / Cold Email.

The Response: Eliciting a human reply (Positive/Negative).

The Call: Converting the reply into a 30-min video interview.

3. Psychological Drivers ("The Yes")

Since we lack brand recognition, we leverage four specific human triggers to get a reply:

Startup Sympathy: Playing the "young, eager founder" card to leverage goodwill (e.g., WHU network).

Ego Appeal: Positioning the prospect as an "Industry Veteran" whose expert opinion is vital.

Incentive: Offering payment for consultation to signal seriousness.

Authenticity: Using raw, non-salesy tonality to differentiate from automated bots.

4. The Resistance ("The No")

When the funnel breaks, we identify and fix one of four blockers:

Sales Fatigue: They fear a pitch.

Problem Denial: They accept the status quo.

Cognitive Load: Message is too complex.

Operational Timing: B2C targets drown in Q4.

Ultra Relevant Macro Hook Strategy

documents/ultra-relevant-macro-hook-strategy.txt


### I. The Macro Hook: The Inevitable Shift to AI Commerce

- **The New Operating System for Commerce:**
    - We are witnessing the end of traditional e-commerce search and the birth of "AI Commerce."
    - The behavior is shifting from searching on Google to buying directly inside chat models.
    - This is not a future trend; it is happening now: OpenAI has announced commerce protocols for ChatGPT so you can actually pay for products in ChatGPT.
    - OpenAI also announced their shopping research where you can list preferences visually so the AI understands what is important when selecting a product.
    - Google has launched Agentic Commerce for the new shopping era, meaning you can buy products in chat models.
- **The $300 Billion Opportunity:**
    - The "AI commerce boom" is projected to be a $300 billion market.
    - To participate in this boom, brands must have their products visible and buyable within these AI ecosystems.
- **The "Fool's Gold" of AI Visibility:**
    - The current market reaction is a superficial rush towards "AI SEO" or visibility tracking.
    - This is a super hot topic: Peak AI just raised a $21 million Series A, Y Combinator is still funding its fourth prompt-tracking tool, and Profound was funded by Sequoia for AI visibility tracking.
    - *Our Validation:* We know this because we built a leading tool for it ([**ultra-relevant.com**](https://ultra-relevant.com)), scaled to 40 companies, and acquired 6 paying customers in 2 months.
    - *The Insight:* We found that companies buying this at ticket sizes of €8–200 a month are just riding a wave, being trendy, and looking out for the future. It's mainly SaaS agencies interested on behalf of their clients.
    - Companies don't yet get much traffic from ChatGPT, and the agentic protocol is not here yet; AI visibility tracking simply isn't there for them yet.
    - The real problem is deeper: Companies have a hallucination risk. The AI scrapes numerous products and cannot compare which is best. Companies want to be visible in AI search, so the AI needs to understand their product data well, and they need to submit the truth about their products to ChatGPT and co.

### II. The Missing Link: Tracing the Current Failure Point

- **The "Data Cable" is Already Broken (Before AI):**
    - To understand why companies aren't ready for AI commerce, we traced their *current* data supply chain used for existing channels.
    - We analyzed the whole chain where they store their product information and send it via feed syndication.
    - The typical chain is: ERP (Master Data/Stammdaten) -> **PIM (Product Information)** -> Channel Management/Feed Syndication (e.g., TradeByte, ProductsUp) -> Marketplaces, Large Customers, Data Pools -> AI (ChatGPT, Google, Meta).
- **Identifying the Universal Choke Point:**
    - When we analyzed this whole chain with global enterprises, we found that these large companies have way bigger pains right now blocking them from AI search.
    - The absolute bottleneck is always ingestion into the PIM.
    - *Conclusion:* The PIM ingestion point is the single point of failure for a company's entire commercial product strategy, right now.

### III. The Deep Operational Pain: The "Expert Bottleneck" in the PIM

- **The PIM Landscape & Stagnation:**
    - We focused on modern PIM brands like Akeneo, Pimcore, and Salsify (the newest and fastest-growing).
    - These systems are powerful databases but are still 10+ years old and were built for a pre-AI era. They cannot handle the required ingestion velocity.
    - The global PIM market is huge, and all big retailers use one, but this problem remains unsolved.
    - *Visual Evidence:* The backend interfaces are complex landscapes of empty text fields, dropdowns, language tabs, and channel-specific constraints that must be filled.
- **The "Coordination Nightmare" for Product, Sales & Marketing:**
    - This is a super painful problem for Product Management, Marketing, and Sales.
    - These fields are currently filled in a complex process inside the company by 5 different departments and one agency.
    - It involves lots of unstructured data: Information comes from TXT sheets, images, PDFs, and spreadsheets sent for power e-commerce, stores, and high-value customers.
    - This process is very error-prone, and delays are costly, even business-critical, when a company plans a launch.
- **The Need for Specialized Expertise:**
    - You cannot hire an intern to fill these fields. It requires specialized knowledge.
    - You have to be careful what you write: translations carry marketing and sales weight, legal claims must be avoided, marketplaces impose field-length limits, and copy can be SEO-optimized.
    - *Industries affected:* Cosmetics, Consumer Electronics, High-Value Consumer Goods (skis, golf, billiard tables), Furniture, Baby Care, Supplements, and Manufacturing. These have high SKU velocity, complexity (high number of attributes), and sensitivity.
    - *AI Search Impact:* These are also the companies most affected by AI search—products you ask ChatGPT to personalize (skincare) or highly functional products you need to compare (TVs).
- **Quantifying the Failure to Scale (The Killer Stat):**
    - *Validation:* We validated this through interviews with Essence, Cosnova, GHD, Kellogg's, Bosch, Vela, and Mann & Hummel. They all confirm this is unsolved. We also have quotes from top voices like the ex-CPO of ProductsUp.
    - *The Math of Pain:* A calculation for a launch of 100 products: 100 products × 20 fields × 6 languages = 12,000 expert decisions made manually under pressure.

### IV. The Solution: The AI Ingestion Agent

- **We Don't Replace the PIM; We Automate the Ingestion:**
    - We build an AI agent that sits *on top* of the PIM (Akeneo, Pimcore, Salsify).
    - The whole data cable still relies on mappings and sheets from the pre-AI era. Bolting AI onto such a product doesn't work; the product has to be AI-native.
- **How it Works: Automating Coordination & Expertise:**
    - *Input:* You can dump all information (TXT, PNG, XLS sheets, PDF) inside our bucket.
    - *Process:* The AI agent understands the product data the company has and its graph. It automatically sources the information for the right products or a pre-selected product selection. It uses Gemini 3 for interpretation and extraction tools for assets.
    - *Underlying Tech:* The agent has instructions, knows its goal, has specific company information, knows how to interpret uploaded data, and has knowledge about the underlying PIM data (product graph).
    - *Output (Human-in-the-Loop):* We give a view where the human is in the loop to accept enrichments. It gives a confidence score. You can see past enrichments. You can send this approval page to an expert (SEO knowledge, legal, translation knowledge) who is able to accept it.
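A minimal sketch of the confidence-gated routing described above (the threshold and the field-to-expert routes are assumptions):

```python
# Sketch of the human-in-the-loop step: each enrichment carries a
# confidence score; high-confidence items go to the one-click approval
# page, low-confidence items are routed to a named expert for review.

EXPERT_ROUTES = {"seo_title": "seo", "claims": "legal", "description_fr": "translation"}

def route(enrichment: dict, threshold: float = 0.85) -> str:
    """Return the queue an enrichment should land in."""
    if enrichment["confidence"] >= threshold:
        return "approval-page"
    return f"expert:{EXPERT_ROUTES.get(enrichment['field'], 'product-manager')}"

print(route({"field": "weight_g", "confidence": 0.97}))  # approval-page
print(route({"field": "claims", "confidence": 0.55}))    # expert:legal
```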
- **Why We Save 80% of Expert Time:**
    - The AI automates the "chasing" and synthesis of dispersed knowledge. What took a Product Manager 20 minutes of cross-referencing per product cluster now takes seconds of AI processing and a single click for approval.

### V. The Vision: The "AI Data Cable"

- **Phase 1: The Painkiller (Current Focus):**
    - Solve the massive ingestion pain for PIMs. Build the ingestion bucket.
- **Phase 2: The Infrastructure (The Data Cable):**
    - Disrupt feed management systems. We don't focus on marketplaces but on the new emerging era of AI commerce.
    - We set the export system on top that sends information to ChatGPT, Google Shopping, Meta, Perplexity.
- **Phase 3: The Moat (The Feedback Loop):**
    - We will integrate into the payment protocol of the agents. This allows us to reverse write to the ERP.
    - We get data on what actually sells, process sales, optimize pricing, and offer discounts (e.g., Black Friday).
    - We use our proprietary data from [**ultra-relevant.com**](https://ultra-relevant.com) (prompt tracking) to do "Generative Engine Optimization"—optimizing product data based on when products are actually named and what works across multiple customers.

### VI. Validation, Team, and Ask

- **Traction & Validation:**
    - We talk to two customers or industry people every day. We have great partners for the vision.
    - Pricing model: $8k/month + usage-based pricing.
    - *Technical Roadmap:*
        - V1: Spreadsheet version (UI with underlying sheet, dump information to fill it). Perfect demo, no PIM integration yet.
        - V2: Add approval UI (second screen to verify input).
        - V3: States (storing drafts, sending approval pages).
        - V4: Connect underlying PIM system, replace underlying sheet. AI can navigate the product graph for a specific selection.
        - V5: AI tool able to navigate the full product graph to find spots to enrich.
        - V6: Send out information (connect pricing API, ERP, asset management) to ChatGPT, Google, etc.
- **The Team: Built for This Specific Problem:**
    - We are a team of three. We are based in Delta Campus and Merantix AI Campus with access to German engineers.
    - **Falco (CEO):** Born founder, starting companies since 14. WHU alum. Two years of experience building day-to-day AI products with developers for manufacturing, so he knows the pain of unstructured data. Built two agents to search product data, connected all the systems, and has working products in production. Previously ran an agency, so he is familiar with architecture and dev teams. Built [**ultra-relevant.com**](https://ultra-relevant.com) (AI rank tracking) completely on his own; it is live with customers. Knows the language of the customers.
    - **Elena (COO):** WHU alum. Worked in VC before. The "beast on the go-to-market side." Great contacts into the industry and consultancies. Has known Falco for 5 years; truly committed.
    - **Yusuf (CTO):** Software engineer at heart, developing in Rust. Highly skilled. Working on knowledge graphs for codebases, which is directly transferable to product graphs. Worked with Falco in the Hacker Room at Merantix.
- **The Ask & Capital Structure:**
    - Raising €500k at a €4m pre-money valuation.
    - **€260k is already committed** from a family office that owns a retailer in manufacturing (strategic).
    - We want to select five more strategic angels who can provide access to retailers. We are searching for people from the industry (competitors, the current landscape; e.g., we just closed the ex-CPO of ProductsUp) who bring experience and contacts.
- **Tech Stack Note:**
    - Next.js, Tailwind CSS, Shadcn UI. Supabase. Gemini 3 for interpretation. Extraction tools for assets.
    - Hosted in Europe (Germany), compliant AI models, GDPR compliant. No PII data in the PIM makes integration easier. Single sign-on needed. On-prem not needed.

Ultra Relevant Story V1

documents/ultra-relevant-story-v1.md


Page 1

Ultra Relevant The AI Layer for Product Data.

Page 2

AI Commerce is Here. If Your Product Data Isn't Ready, You're Invisible.

The $300 Billion Shift.

OpenAI → Shopping inside ChatGPT Google → Agentic Commerce announced Behavior → From searching on Google to buying inside chat

(Flow diagram:) Traditional: Search → Browse → Compare → Buy AI Commerce: Ask → AI Recommends → Buy

Page 3

We Built an AI Visibility Tracker. 40 Companies. 6 Paying Customers in 2 Months. Then We Found the Real Problem.

Our Origin

ultrarelevant.com — tracking how AI ranks products.

What we discovered:

  • Companies buying for €8–200/month are just riding a trend
  • They don't get meaningful traffic from ChatGPT yet
  • The real problem is deeper: hallucination risk, bad product data

The Insight: "We traced the problem upstream. The absolute bottleneck is always PIM ingestion. Where PDFs and Spreadsheets become DB Fields. Where unstructured data becomes structured data."

Page 4

The Product Data Supply Chain: A Pre-AI Infrastructure Powering an AI World.

Status Quo

ERP (Master Data) → PIM (Product Info) → Feed Syndication → Marketplaces → AI Commerce

↑ BOTTLENECK: Manual Entry

5 departments + 1 agency coordinate to fill 50+ fields per product.

Data sources:

  • PDFs with product briefs
  • Excel sheets with INCI, EAN, specs
  • Emails with certificates
  • Images with dimensions

Page 5

PIM Systems: Powerful Databases Built for a Pre-AI Era. They Cannot Handle Modern Ingestion Velocity.

Market Overview

Global PIM Market: $15.6B (2024) → $44.3B (2030)

Major Players: Akeneo · Pimcore · Salsify · Stibo · Informatica

The Problem:

  • Systems are 10+ years old
  • Built for storage and syndication, not ingestion
  • Backend = complex landscapes of empty fields
  • Every field requires specialized expertise

Industries Most Affected: Cosmetics (Claims, INCI) · Consumer Electronics (Specs) · Furniture (Variants) · Manufacturing (Datasheets)

Page 6

PIM Data Entry: An Unscalable Expert Bottleneck Blocking Product Launches.

Asset Overview

People Involved per Product Line:

  • Product Manager: 0.3 FTE (core data, specs, positioning)
  • Marketing: 0.2 FTE (copy, benefits, brand voice)
  • SEO Expert: 0.15 FTE (keywords, titles, meta)
  • Legal/Compliance: 0.1 FTE (claims review)
  • Translation Agency: External (6+ languages)

Total: ~0.75 FTE per product line

Pain Points from 35+ Interviews:

  • 87% "Experts are the bottleneck"
  • 72% "Copy-paste from PDFs all day"
  • 65% "SEO gets deprioritized"
  • 58% "Legal review takes forever"

3–6 weeks: Average time from product ready to PIM complete

Page 7

These Companies Need One AI-Native Data Layer.

Not Another Manual Process.

Page 8

From Ingestion Automation to AI Commerce: Data In, Products Out.

Tech Roadmap

Phase 1: Painkiller → Phase 2: Data Cable → Phase 3: Feedback Loop

Product Capability:

Ingestion Bucket – Drop PDFs, spreadsheets, emails, images AI Extraction – Understands product graph, extracts to PIM schema Human-in-the-Loop – Confidence scores, approval workflow, expert routing PIM Push – Direct integration with Akeneo, Pimcore, Salsify AI Commerce Feed – Export to ChatGPT, Google AI, Perplexity

Architecture:

Next.js + Supabase + Gemini 3 GDPR compliant, EU-hosted (Germany) SSO, no PII in PIM, easy integration

Relative Expert Time Savings: 0% → –50% (V1) → –80% (V3) → Ongoing optimization

Page 9

We Target High-Velocity Product Companies Where Data Complexity Meets Launch Pressure.

Target Customer

Company Profile:

  • 50+ SKU launches/year (high velocity)
  • 30+ fields per product (high complexity)
  • 3+ markets/languages (high coordination)
  • Using Akeneo, Pimcore, or Salsify

Buying Signals:

  • PIM implementation planned
  • Seasonal launch pressure
  • Retailer compliance issues
  • Data quality complaints

ICP – Who We Sell To:

  • Head of Product/PIM (Primary Buyer – owns data quality)
  • Head of Category (owns launch timelines)
  • Head of Digital/Ecom (needs content velocity)

Validated: Essence, Cosnova, GHD, Kellogg's, Bosch, Wella, Mann+Hummel

Page 10

We Start Low-Risk, Then Expand Into High-Value Fields.

Go-to-Market

Entry Strategy:

Start: Dimensions, Weight, EAN, Colors → Zero expertise needed, prove value in days

Expand: SEO, Translations, Benefits → Marketing and sales data

Scale: Legal, Compliance, Claims → Full expert replacement

80% of fields are low-risk extraction.

Pricing Model: $8k/month + usage-based pricing

Page 11

By Automating Ingestion and Connecting to AI Commerce, We Become the Product Data Hub.

Vision

Every Product, Everywhere. Instantly discoverable. Perfectly described. Ready to sell.

(Concentric rings:) Outer: AI Commerce (ChatGPT Shopping · Perplexity · Google AI) Middle: Traditional Commerce (Amazon · Shopify · Retail) Center: Ultra Relevant – Your Product Data Hub

Three Pillars:

  • One Source: All product data flows through one intelligent layer
  • Every Channel: AI and traditional commerce, from one place
  • Zero Friction: Products go live in hours, not weeks

Page 12

Ultra Relevant Will Become the Default Infrastructure for AI-Ready Product Data.

Opportunity

2025: €500K ARR – Pilot partners, PIM ingestion only 2026: €3M ARR – Feed management, AI commerce export 2027: €15M ARR – Feedback loops, pricing optimization 2028: €50M+ ARR – Category expansion, international

At 15x ARR (vertical SaaS comp): → €750M+ valuation path

Page 13

We Are Ready to Start.

Page 14

Our Team Has Built Products in This Exact Problem Space.

Team

Falco (CEO)

  • Born founder, starting companies since 14
  • WHU alum
  • 2+ years building AI products for manufacturing
  • Built ultra-relevant.com solo, live with customers
  • Knows the language of PIM buyers

Elena (COO)

  • WHU alum
  • Ex-VC background
  • Strong GTM and industry contacts
  • Has known Falco for 5 years, truly committed

Yusuf (CTO)

  • Software engineer, Rust developer
  • Working on knowledge graphs for codebases
  • Directly transferable to product graphs
  • Collaborated with Falco at Merantix

Based at: Delta Campus & Merantix AI Campus, Berlin

Page 15

We Follow a Clear Plan for Delivering Proof Points.

Roadmap

Angel Round → Secure strategic angels with retail/manufacturing access

Pilot Launch → 2-3 enterprise pilots with design partners

Seed Round → Raise on ARR + proven value metrics

Product Expansion → Feed management, AI commerce connectors

Series A → International expansion, category growth

Page 16

We're Raising €500k at €4M Pre-Money.

€260k Already Committed.

Capital Structure:

  • €260k committed from family office (owns retailer in manufacturing)
  • Seeking 5 strategic angels with retail/manufacturing access
  • Advisors include ex-CPO of ProductsUp

Use of Funds:

  • Engineering: 60%
  • Sales & Pilots: 25%
  • Operations: 15%

Page 17

Product Data Will Flow Through AI. We're Building the Pipe.

Falco Punch, CEO falco@ultra-relevant.com +49 XXX XXXXXXX

Elena [Last Name], COO elena@ultra-relevant.com +49 XXX XXXXXXX

Yusuf Barmini, CTO yusuf@ultra-relevant.com +49 XXX XXXXXXX

ultra-relevant.com

Ultra Relevant Story

documents/ultra-relevant-story.md


Page 1

Ultra Relevant The AI Layer for Product Data.

Page 2

Commerce is Moving Into AI. Brands Without Clean Product Data Will Be Invisible.

The $300 Billion Shift.

AI Commerce Market (2025): $45B AI Commerce Market (2030): $300B CAGR: 46%

OpenAI announced: Shopping protocol inside ChatGPT Google announced: Agentic Commerce for AI-native buying Amazon announced: Rufus AI shopping assistant

Consumer behavior shift:

  • 67% of Gen Z prefer AI recommendations over search
  • 43% have already purchased via AI chat interface
  • Product discovery via AI: +340% YoY

The Math: If 10% of e-commerce flows through AI by 2028, that's $600B in transactions. To participate: Your product data must be AI-readable, accurate, and complete.

Page 3

We Built an AI Visibility Tracker. Scaled to 40 Companies. Then Found the Real Problem.

Our Origin

ultrarelevant.com — tracking AI product rankings

Traction in 8 weeks:

  • 40 companies onboarded
  • 6 paying customers
  • €8–200/month ticket size

The Discovery: Companies at this price point were buying "trend insurance." They don't get meaningful AI traffic yet.

But in enterprise conversations, we found the real pain:

  • "We can't even get data INTO our PIM fast enough."
  • "AI visibility? We're 6 months behind on product launches."

The Pivot:

  • Visibility tracking = €200/month play money
  • Fixing the root cause = €8,000/month enterprise software

Page 4

The Product Data Supply Chain: A 1990s Infrastructure Blocking AI Readiness.

The Broken Cable

ERP → PIM → Feed Management → Channels → AI Commerce

The Bottleneck:

  • 100% of enterprise product data flows through PIM
  • 0% of PIMs can ingest unstructured data
  • Result: Manual typing by expensive experts

Current Data Sources:

  • Product briefs (PDF): 47% of product data
  • Supplier sheets (Excel): 31% of product data
  • Emails with certificates: 12% of product data
  • Images with specs: 10% of product data

None of this auto-flows into PIM. Every field is typed by hand.

Page 5

The PIM Market: $15.6B Today, But Zero Innovation in Ingestion.

Market Overview

Global PIM Market:

  • 2024: $15.6B
  • 2030: $44.3B
  • CAGR: 19.1%

Major Players:

  • Akeneo: $100M+ ARR, 800+ enterprise customers
  • Salsify: $200M+ ARR, 4,000+ brands
  • Pimcore: Open-source, 100,000+ installations
  • Stibo: $300M+ revenue, legacy enterprise

The Gap: These systems are powerful databases. But they were built for storage and syndication—not ingestion.

  • Average PIM implementation: $250K–$2M
  • Average time to value: 12–18 months
  • #1 reason for failure: Data migration and ongoing data entry

Page 6

PIM Data Entry: 0.75 FTE per Product Line. 3-6 Weeks per Launch. 23% Error Rate.

The Expert Bottleneck

Quantified from 35 interviews with PIM, e-commerce, and product operations leaders.

People Involved per Product Line:

| Role | FTE | Hourly Cost | Annual Cost |
|---|---|---|---|
| Product Manager | 0.30 | €85/hr | €52,000 |
| Marketing | 0.20 | €70/hr | €29,000 |
| SEO Expert | 0.15 | €75/hr | €23,000 |
| Legal/Compliance | 0.10 | €120/hr | €25,000 |
| Translation | External | | €35,000 |
| **Total** | **0.75 FTE** | | **€164,000/yr** |

Per product line. Large brands have 10-50 product lines.

The Math of Pain:

  • 100 products × 50 fields × 6 languages = 30,000 expert decisions
  • At 2 minutes per field = 1,000 hours of expert time
  • At €80/hr blended = €80,000 per launch cycle
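
As a sanity check, the launch-cycle arithmetic above can be reproduced in a few lines (the product counts, the 2-minutes-per-field figure, and the €80/hr blended rate are all the slide's own assumptions):

```typescript
// Launch-cycle cost model from the "Math of Pain" bullets above.
const products = 100;
const fieldsPerProduct = 50;
const languages = 6;
const minutesPerField = 2; // slide assumption: 2 minutes of expert time per field
const blendedRate = 80;    // slide assumption: EUR 80/hr blended expert rate

const decisions = products * fieldsPerProduct * languages; // expert decisions
const hours = (decisions * minutesPerField) / 60;          // expert hours
const costPerLaunch = hours * blendedRate;                 // EUR per launch cycle

console.log(decisions, hours, costPerLaunch); // 30000 1000 80000
```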

Time to Market:

  • Product ready → PIM complete: 3-6 weeks
  • Each week of delay = €50K–500K in lost revenue (seasonal products)

Error Rates:

  • Manual entry error rate: 23%
  • Cost of retailer compliance rejection: €5,000–25,000 per incident
  • Avg. compliance rejections per year: 12 (mid-size brand)

Page 7

"Pro Leitung kannst Du mal eine halbe Stunde rechnen."

"Copy Paste kannst du voll vergessen. Ich habe gefühlt 400 Attribute."

"Vor der Messe haben die wie die Wilden Daten reingehackt."

"Ich krieg Geld dafür, dass ich den Umsatz kurbel—nicht dass ich Excel-Daten ins PIM hacke."

— Product Data Manager, Igus — E-Commerce Manager, Cosnova — Head of Digital, Mann+Hummel — Sales Director, Bosch

Page 8

We Don't Replace the PIM. We Automate the Ingestion.

The Solution

Input: Drop any document (PDF, Excel, Email, Image)
Process: AI extracts, maps to your PIM schema, routes for approval
Output: Clean data in your PIM, ready for channels + AI commerce

How It Works:

  1. Dump: Upload product briefs, supplier sheets, images
  2. Extract: Gemini 3 interprets unstructured data
  3. Map: AI understands your product graph and field constraints
  4. Review: Human-in-the-loop with confidence scores
  5. Push: Direct API to Akeneo, Pimcore, Salsify
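
The review step (stage 4) can be sketched in code; the type and function names here are illustrative, not the actual product API. The core routing idea is that only low-confidence fields reach a human reviewer:

```typescript
// Illustrative sketch of stage 4 (Review) of the ingestion flow.
// All names are hypothetical, not the real Ultra Relevant API.
type MappedField = { field: string; value: string; confidence: number };

// Only fields below the confidence threshold go to a human reviewer.
function needsReview(fields: MappedField[], threshold = 0.9): MappedField[] {
  return fields.filter((f) => f.confidence < threshold);
}

const mapped: MappedField[] = [
  { field: "color", value: "red", confidence: 0.98 },
  { field: "inci_list", value: "Aqua, Glycerin", confidence: 0.72 },
];
console.log(needsReview(mapped).map((f) => f.field)); // [ 'inci_list' ]
```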

The Result:

  • 20 minutes of cross-referencing → 1 click approval
  • 3-6 weeks → 2-3 days
  • 23% error rate → <2% (AI + human verification)

Page 9

From Ingestion to AI Commerce: The Full Data Cable.

Tech Roadmap

| Phase | Capability | Timeline | Value |
|---|---|---|---|
| V1: Painkiller | Ingestion bucket + extraction | Q1 '25 | -50% expert time |
| V2: Approval | Human-in-the-loop UI | Q2 '25 | -70% expert time |
| V3: PIM Connect | Direct Akeneo/Pimcore API | Q3 '25 | -80% expert time |
| V4: AI Commerce | Export to ChatGPT, Google AI | Q4 '25 | New revenue channel |
| V5: Feedback Loop | Sales data → optimization | 2026 | Continuous improvement |

Architecture:

  • Frontend: Next.js, Tailwind, Shadcn
  • Backend: Supabase, Edge Functions
  • AI: Gemini 3 (interpretation), custom extraction models
  • Hosting: EU (Germany), GDPR compliant
  • Security: SSO, no PII storage, SOC2 roadmap

Expert Time Reduction:

| Stage | Manual | With Ultra Relevant | Savings |
|---|---|---|---|
| Extraction | 8 hrs | 0.5 hrs | 94% |
| Mapping | 4 hrs | 0.5 hrs | 88% |
| Review | 6 hrs | 2 hrs | 67% |
| **Total** | **18 hrs** | **3 hrs** | **83%** |

Page 10

Target: High-Velocity Product Companies Where Complexity Meets Launch Pressure.

ICP Definition

Company Profile:

  • 50+ SKU launches/year (high velocity)
  • 30+ fields per product (high complexity)
  • 3+ markets/languages (high coordination)
  • Using Akeneo, Pimcore, or Salsify

TAM Calculation:

  • Companies using modern PIM in DACH: ~2,400
  • Matching velocity/complexity criteria: ~850
  • Annual contract value: €96K (€8K/month)
  • TAM DACH: €82M

European expansion:

  • UK: ~1,200 targets → €115M
  • France: ~900 targets → €86M
  • Nordics: ~600 targets → €58M
  • TAM Europe: €340M
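
The bottom-up TAM math can be verified directly (target counts and the €96K ACV are the slide's own figures; rounding to whole millions yields the per-region numbers, and the €341M total is rounded down to €340M on the slide):

```typescript
// Bottom-up TAM: matching targets x EUR 96K ACV, rounded to EUR millions.
const acv = 96_000;
const targets: Record<string, number> = { DACH: 850, UK: 1200, France: 900, Nordics: 600 };

const tamM: Record<string, number> = {};
let totalTargets = 0;
for (const [region, n] of Object.entries(targets)) {
  tamM[region] = Math.round((n * acv) / 1e6);
  totalTargets += n;
}
const totalM = Math.round((totalTargets * acv) / 1e6);

console.log(tamM, totalM); // DACH 82, UK 115, France 86, Nordics 58; total 341
```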

Industries (ranked by pain):

  1. Cosmetics: Claims, INCI, 100+ fields, 6+ languages
  2. Consumer Electronics: Specs, compatibility, 200+ fields
  3. Sporting Goods: Materials, sizing, seasonal pressure
  4. Furniture: Dimensions, variants, assembly instructions
  5. Manufacturing B2B: Datasheets, compliance, certifications

Page 11

Pipeline: We've Mapped 850 DACH Targets. 75 Are Ready for Outreach.

Target Funnel

| Stage | Count | Criteria |
|---|---|---|
| All PIM users DACH | 2,400 | Using Akeneo, Pimcore, Salsify |
| Velocity filter | 1,400 | 50+ SKU launches/year |
| Complexity filter | 850 | 30+ fields, 3+ languages |
| Pain signals | 320 | Job postings, compliance issues, PIM RFPs |
| Warm intro possible | 75 | Network, events, advisors |

Validated Conversations:

  • Essence (Cosnova): 400 attributes, massive manual effort
  • GHD: Seasonal launches, retailer compliance pain
  • Kellogg's: Multi-market coordination nightmare
  • Bosch: Sales team wasting time on data entry
  • Mann+Hummel: Pre-trade show data chaos
  • Wella: Translation and localization bottleneck

Advisor Pipeline:

  • Ex-CPO ProductsUp: Direct intros to 20+ brands
  • Akeneo partner network: Access to implementation customers

Page 12

Unit Economics: €8K/Month, 85% Gross Margin, 3-Month Payback.

Business Model

Pricing:

  • Base: €8,000/month (platform + 10 users)
  • Usage: €0.50 per AI-processed field
  • Average customer: €96K ACV

Unit Economics:

| Metric | Value |
|---|---|
| ACV | €96,000 |
| Gross Margin | 85% |
| CAC | €25,000 |
| Payback | 3.1 months |
| LTV (5yr, 10% churn) | €384,000 |
| LTV:CAC | 15:1 |
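
A quick check of the ratios the table implies (note: the 3.1-month payback divides CAC by full MRR; a gross-margin-adjusted payback would be closer to 3.7 months):

```typescript
// Ratios implied by the unit-economics table (all inputs from the slide).
const acv = 96_000;      // EUR annual contract value
const mrr = acv / 12;    // EUR 8,000 per month
const grossMargin = 0.85;
const cac = 25_000;      // EUR customer acquisition cost
const ltv = 384_000;     // EUR, the slide's 5-year LTV figure

const paybackMonths = cac / mrr;                    // 3.125 -> the slide's "3.1"
const paybackMarginAdj = cac / (mrr * grossMargin); // ~3.7 if margin-adjusted
const ltvToCac = ltv / cac;                         // 15.36 -> "15:1"

console.log(paybackMonths.toFixed(1), paybackMarginAdj.toFixed(1), ltvToCac.toFixed(1)); // 3.1 3.7 15.4
```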

Customer Value Proof:

| Current State | With Ultra Relevant | Savings |
|---|---|---|
| €164K/yr expert cost | €33K/yr (80% reduction) | €131K/yr |
| 12 compliance incidents | 2 compliance incidents | €100K/yr |
| 6 weeks time-to-market | 1 week time-to-market | €250K+ (seasonal) |
| **Total Annual Savings** | | **€480K+** |

ROI: 5x on €96K investment

Page 13

Financial Projections: €15M ARR by 2027, Path to €100M.

Growth Model

| Year | Customers | ACV | ARR | Gross Margin |
|---|---|---|---|---|
| 2025 | 5 | €96K | €480K | 80% |
| 2026 | 25 | €110K | €2.75M | 83% |
| 2027 | 120 | €125K | €15M | 85% |
| 2028 | 400 | €150K | €60M | 87% |
| 2029 | 800 | €175K | €140M | 88% |

Assumptions:

  • Land: 5 customers in 2025 (pilot partners)
  • Expand: ACV grows 15%/yr (more fields, more users)
  • Churn: 10% annual (sticky infrastructure)
  • Growth: 4x YoY 2026-2027, 3x 2028, 2x 2029

Valuation Path:

  • 2027 at 15x ARR (vertical SaaS): €225M
  • 2029 at 12x ARR: €1.7B

Comparable Exits:

  • Salsify: $1B+ valuation at $200M ARR
  • Akeneo: $400M+ valuation at ~$100M ARR
  • ProductsUp: Acquired for €1B+ by Bertelsmann

Page 14

We Are Ready to Start.

Page 15

The Team: Product + Engineering + GTM, Built for This Problem.

Team

Falco Punch — CEO

  • Founder since age 14
  • WHU alum
  • 2+ years building AI products for manufacturing
  • Built ultra-relevant.com solo → 40 companies, 6 paying
  • Speaks the language of PIM buyers
  • Based: Merantix AI Campus, Berlin

Elena [Last Name] — COO

  • WHU alum
  • Ex-VC (sourcing + due diligence)
  • Strong GTM network into consultancies + retailers
  • 5-year working relationship with Falco
  • Based: Berlin

Yusuf Barmini — CTO

  • Software engineer, Rust specialist
  • Building knowledge graphs for codebases
  • Directly transferable to product graphs
  • Collaborated with Falco at Merantix Hacker Room
  • Based: Berlin

Advisors:

  • Ex-CPO ProductsUp: Product strategy + customer intros
  • [Strategic Angel]: Retail network in manufacturing

Page 16

We're Raising €500K at €4M Pre-Money. €260K Already Committed.

The Ask

Round Structure:

  • Pre-money: €4M
  • Raise: €500K
  • Post-money: €4.5M
  • Dilution: 11%

Committed:

  • €260K from strategic family office (owns retailer in manufacturing)
  • Seeking: 5 angels × €48K average

Use of Funds (18-month runway):

| Category | Amount | % |
|---|---|---|
| Engineering | €300K | 60% |
| Sales & Pilots | €125K | 25% |
| Operations | €75K | 15% |

Milestones to Seed:

  • 5 pilot customers live
  • €400K+ ARR
  • PIM integrations: Akeneo, Pimcore, Salsify
  • Seed raise: €3-5M at €20M+ valuation

Page 17

Product Data Will Flow Through AI. We're Building the Pipe.

Falco Punch, CEO falco@ultra-relevant.com +49 XXX XXXXXXX

Elena [Last Name], COO elena@ultra-relevant.com +49 XXX XXXXXXX

Yusuf Barmini, CTO yusuf@ultra-relevant.com +49 XXX XXXXXXX

ultra-relevant.com

Berlin, Germany

2026 03 02 Gtm Overview Portal Design

documents/plans/2026-03-02-gtm-overview-portal-design.md


GTM Overview Portal - Design Document

Overview

A web portal hosted on Vercel that provides a complete overview of everything in the GTM repository. White, clean design. Four tabs: Pitch Decks (interactive embeds), Data Tables (CSV viewer), Images (gallery), Documents (rendered markdown). Built as a root-level Astro project.

Folder Reorganization

Move files into a clean structure without deleting anything:

gtm/
  decks/                          # All 12 pitch deck Astro projects
    ale-hop-deck/
    ale-hop-deck-de/
    flaconi-deck/
    furniture-pim-creative/
    hersteller-deck/
    pitch-deck-onepager/
    pitch-deck-sales-component/
    pitch-deck-template/
    pitch-deck-ultra-story/
    teveo-pitch/
    teveo-pitch-v3/
    the-hook-site/

  data/                           # All CSV/TSV/JSON data files
    pipeline/                     # Root-level master lists
      accounts.csv
      target-customers-full-data.csv
      target-customers-logo-wall.csv
      logo-wall-final.csv
      OUTREACH_LOL_MASTER - UNIFIED_MASTER-1.csv
    merger/                       # Merge scripts + TSV/CSV data
    leads/                        # Lead CSVs + combine script
    persons/                      # VP contact lists
    compare/                      # Overlap analysis CSVs + scripts
    allvideonames/                # Raw video company JSONs + dedup/split scripts
    allvideonames8/               # 8-way split video data
    allvideonames16/              # 16-way split video data
    allvideonames16answers/       # ICP analysis CSVs + transform scripts
    videos/                       # Video extraction CSVs only (not video files)

  images/                         # All image assets consolidated
    ultra/                        # Brand assets (logos, team photos, MVP screenshots)
    bijou/                        # Bijou Brigitte client assets

  documents/                      # All markdown/text narrative docs
    THE_HOOK.md
    ultra-relevant-story.md
    ultra-relevant-story-v1.md
    arcos.md
    ultra.txt
    prompt.txt
    plans/                        # Design & implementation plans

  _archive/                       # Uncategorized/legacy content
    pitch-deck.html               # Old standalone HTML pitch deck
    THE HOOK.docx                 # Word doc (MD version extracted)
    THE_HOOK_extracted/media/     # Hook document images
    allvideonames8_backup/        # Backup of 8-way split JSONs
    videos/                       # Original video files (2.1GB .mov/.mp4)

  portal/                         # The overview portal (new Astro project)

Files that stay at root: .git/, .gitignore, CLAUDE.md, package.json, package-lock.json, node_modules/.

Portal Architecture

Tech Stack

  • Astro 5.0, static output
  • No UI framework (pure Astro components + vanilla JS for interactivity)
  • Deployed to Vercel as single project

Project Structure

portal/
  src/
    layouts/Layout.astro
    pages/
      index.astro                 # Redirects to /decks
      decks/
        index.astro               # Deck grid
        [slug].astro              # Individual deck embed view
      data/
        index.astro               # CSV category list
        [...path].astro           # Individual CSV table view
      images/
        index.astro               # Image gallery
      documents/
        index.astro               # Document list
        [slug].astro              # Individual document view
    components/
      TabNav.astro                # Horizontal tab navigation
      DeckCard.astro              # Pitch deck card
      CsvTable.astro              # Expandable CSV table
      ImageGrid.astro             # Image gallery grid
      Lightbox.astro              # Image lightbox modal
    data/
      decks.json                  # Deck metadata (name, description, tags, slug)
      csvIndex.json               # CSV file registry (path, category, row count)
  public/
    decks/                        # Built output from each deck (copied at build time)
  astro.config.mjs
  package.json
  build-all.sh                    # Master build script

Tab 1: Pitch Decks (Landing Page)

Grid of 12 cards. Each card shows:

  • Deck name (human-readable)
  • Short description
  • Language tags (EN, DE)
  • Type badge (Client, Template, Company Story)

Grouped into sections:

  • Client Decks: ale-hop, ale-hop-de, flaconi, teveo-pitch, teveo-pitch-v3
  • Templates: hersteller-deck, pitch-deck-template, furniture-pim-creative
  • Company Story: pitch-deck-onepager, pitch-deck-sales-component, pitch-deck-ultra-story, the-hook-site

Click a card -> navigates to /decks/[slug] which shows the deck in a full-width iframe loading /decks/<name>/index.html. Back button returns to grid.

Tab 2: Data Tables

Lists all CSVs grouped by category:

  • Pipeline & Accounts (5 files)
  • Merger (3 files)
  • Leads (7 files)
  • Persons (7 files)
  • Compare/Overlap (8 files)
  • ICP Analysis (3 files)
  • Video Extractions (7 files)
  • Deck Data (`hersteller-deck/datagtm/*.csv`, `teveo-pitch-v3/datagtm/*.csv`, etc.)

Each CSV entry shows: file name, row count, column headers. Click to expand: renders first 10 rows in a styled table. "Show all rows" button loads the full table.
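
A minimal sketch of the build-time CSV summarization this tab needs. The naive comma split is an assumption for illustration; real files with quoted fields would need a proper CSV parser:

```typescript
// Build-time CSV summary for the Data Tables tab: headers, row count, 10-row preview.
// Naive parser: assumes no quoted commas (use a CSV library for production data).
function summarizeCsv(text: string, previewRows = 10) {
  const lines = text.trim().split(/\r?\n/);
  const headers = lines[0].split(",");
  const rows = lines.slice(1).map((l) => l.split(","));
  return { headers, rowCount: rows.length, preview: rows.slice(0, previewRows) };
}

const sample = "company,country\nBosch,DE\nWella,DE\nFlaconi,DE";
const summary = summarizeCsv(sample);
console.log(summary.headers, summary.rowCount); // [ 'company', 'country' ] 3
```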

Tab 3: Images

Grid gallery of all images from images/ directory. Grouped by source: Brand Logos, Team Photos, MVP Screenshots, Client Assets. CSS grid with object-fit: cover thumbnails. Click to open full-size in a lightbox modal overlay.

Tab 4: Documents

List of all .md and .txt files from documents/. Each rendered as styled HTML (headings, lists, tables, code blocks). Also includes deck-specific markdown: VALUE-PROP-SLIDES.md, PITCH-DECK-PLAN.md, anforderungen.md, slides.md, storytwo.md, etc.

Visual Design

  • White background, clean minimal aesthetic
  • Inter font family (matching Ultra Relevant brand)
  • Tab bar: horizontal top nav, active tab has #2563eb bottom border
  • Cards: white, subtle box-shadow (0 1px 3px rgba(0,0,0,0.1)), 8px border-radius, hover lift
  • Tables: alternating rows (#f9fafb), sticky header, subtle borders
  • Image gallery: CSS grid, rounded corners
  • Responsive down to tablet width

Build Pipeline

#!/bin/bash
# build-all.sh
set -euo pipefail  # abort if any deck build fails

# 1. Build each deck
for deck in ../decks/*/; do
  (cd "$deck" && npm install && npm run build)
done

# 2. Copy built decks into portal/public
for deck in ../decks/*/; do
  name=$(basename "$deck")
  mkdir -p "public/decks/$name"
  cp -R "$deck/dist/." "public/decks/$name/"  # "dist/." copies contents on both BSD and GNU cp
done

# 3. Build portal
npm install && npm run build

Vercel root directory set to portal/. Build command: bash build-all.sh.

Deck Metadata (decks.json)

[
  { "slug": "ale-hop-deck", "name": "Ale-Hop Pitch", "description": "Product data ingestion pitch for Ale-Hop retail chain", "tags": ["EN"], "group": "client" },
  { "slug": "ale-hop-deck-de", "name": "Ale-Hop Pitch (DE)", "description": "German version of the Ale-Hop pitch", "tags": ["DE"], "group": "client" },
  { "slug": "flaconi-deck", "name": "Flaconi Pitch", "description": "Beauty retail pitch for Flaconi", "tags": ["EN"], "group": "client" },
  { "slug": "furniture-pim-creative", "name": "Furniture PIM Creative", "description": "LinkedIn ad format - PIM creative for furniture industry", "tags": ["EN"], "group": "template" },
  { "slug": "hersteller-deck", "name": "Manufacturer Template", "description": "Dynamic template deck with JSON config. Currently configured for Vega.", "tags": ["EN", "DE"], "group": "template" },
  { "slug": "pitch-deck-onepager", "name": "Ultra Relevant Onepager", "description": "2-page PDF format company presentation with 36 slide components", "tags": ["EN"], "group": "story" },
  { "slug": "pitch-deck-sales-component", "name": "Sales Component Deck", "description": "Component-based sales presentation with reusable slides", "tags": ["EN"], "group": "story" },
  { "slug": "pitch-deck-template", "name": "Base Template", "description": "Foundation template for generating client-specific decks", "tags": ["EN"], "group": "template" },
  { "slug": "pitch-deck-ultra-story", "name": "Ultra Story", "description": "2-slide narrative deck - What We Found + AI Commerce Shift", "tags": ["EN"], "group": "story" },
  { "slug": "teveo-pitch", "name": "Teveo Pitch v1", "description": "First version Teveo-branded pitch with file-flow visualization", "tags": ["DE"], "group": "client" },
  { "slug": "teveo-pitch-v3", "name": "Teveo Pitch v3", "description": "Latest Teveo pitch - component-based with bilingual support and onepager", "tags": ["DE", "EN"], "group": "client" },
  { "slug": "the-hook-site", "name": "The Hook", "description": "THE HOOK framework - Death of the Funnel, Shortlist Era", "tags": ["EN"], "group": "story" }
]

2026 03 02 Gtm Overview Portal

documents/plans/2026-03-02-gtm-overview-portal.md


GTM Overview Portal Implementation Plan

For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

Goal: Reorganize the GTM repo into clean categories and build a white, minimal Astro portal with 4 tabs (Pitch Decks, Data Tables, Images, Documents) deployed to Vercel.

Architecture: Root-level Astro 5 static site at portal/. At build time, all 12 deck projects are built and their dist/ output is copied into portal/public/decks/. CSVs are parsed at build time and baked into HTML. Images are copied into portal/public/images/. Markdown is rendered at build time.

Tech Stack: Astro 5.0 (static), vanilla JS for client interactivity (lightbox, table expand), Inter font from Google Fonts, deployed to Vercel.

Design doc: docs/plans/2026-03-02-gtm-overview-portal-design.md


Task 1: Folder Reorganization - Create Directory Structure

Files:

  • Create directories: decks/, data/, data/pipeline/, images/, documents/, documents/plans/, _archive/, _archive/videos/

Step 1: Create the new directory tree

cd /Users/falco/Documents/Repositories/gtm
mkdir -p decks data/pipeline images documents/plans _archive/videos _archive/THE_HOOK_extracted_media

Step 2: Verify directories exist

ls -d decks data data/pipeline images documents documents/plans _archive _archive/videos

Expected: All directories listed without error.


Task 2: Folder Reorganization - Move Deck Projects

Step 1: Move all 12 deck directories into decks/

cd /Users/falco/Documents/Repositories/gtm
mv ale-hop-deck decks/
mv ale-hop-deck-de decks/
mv flaconi-deck decks/
mv furniture-pim-creative decks/
mv hersteller-deck decks/
mv pitch-deck-onepager decks/
mv pitch-deck-sales-component decks/
mv pitch-deck-template decks/
mv pitch-deck-ultra-story decks/
mv teveo-pitch decks/
mv teveo-pitch-v3 decks/
mv the-hook-site decks/

Step 2: Verify all 12 decks moved

ls decks/

Expected: 12 directories listed.


Task 3: Folder Reorganization - Move Data Files

Step 1: Move root CSVs into data/pipeline/

cd /Users/falco/Documents/Repositories/gtm
mv accounts.csv data/pipeline/
mv target-customers-full-data.csv data/pipeline/
mv target-customers-logo-wall.csv data/pipeline/
mv logo-wall-final.csv data/pipeline/
mv "OUTREACH_LOL_MASTER - UNIFIED_MASTER-1.csv" data/pipeline/

Step 2: Move data processing directories into data/

mv merger data/
mv leads data/
mv persons data/
mv compare data/
mv allvideonames data/
mv allvideonames8 data/
mv allvideonames16 data/
mv allvideonames16answers data/

Step 3: Move video CSVs into data/videos/, video files into _archive/videos/

mkdir -p data/videos
cp videos/*.csv data/videos/
mv videos/*.mov videos/*.mp4 _archive/videos/
rm -rf videos/

Step 4: Verify data directory

ls data/

Expected: pipeline, merger, leads, persons, compare, allvideonames, allvideonames8, allvideonames16, allvideonames16answers, videos


Task 4: Folder Reorganization - Move Images, Documents, Archive

Step 1: Move image directories

cd /Users/falco/Documents/Repositories/gtm
mv ultra images/
mv bijou images/

Step 2: Move document files

mv THE_HOOK_extracted/THE_HOOK.md documents/
mv ultra-relevant-story.md documents/
mv ultra-relevant-story-v1.md documents/
mv arcos.md documents/
mv ultra.txt documents/
mv prompt.txt documents/
mv docs/plans/*.md documents/plans/

Step 3: Move archive items

mv pitch-deck.html _archive/
mv "THE HOOK.docx" _archive/
mv THE_HOOK_extracted/media _archive/THE_HOOK_extracted_media/
rm -rf THE_HOOK_extracted
mv allvideonames8_backup _archive/

Step 4: Verify final structure

echo "=== Root ===" && ls -1 | grep -v node_modules | grep -v .git
echo "=== Decks ===" && ls decks/
echo "=== Data ===" && ls data/
echo "=== Images ===" && ls images/
echo "=== Documents ===" && ls documents/
echo "=== Archive ===" && ls _archive/

Step 5: Commit the reorganization

git add -A
git commit -m "refactor: reorganize GTM repo into decks/, data/, images/, documents/, _archive/"

Task 5: Scaffold Portal Astro Project

Files:

  • Create: portal/package.json
  • Create: portal/astro.config.mjs
  • Create: portal/tsconfig.json

Step 1: Initialize Astro project

cd /Users/falco/Documents/Repositories/gtm
mkdir -p portal/src/{layouts,pages/{decks,data,images,documents},components,data} portal/public/decks

Step 2: Create portal/package.json

{
  "name": "gtm-portal",
  "type": "module",
  "version": "1.0.0",
  "scripts": {
    "dev": "astro dev",
    "build": "astro build",
    "preview": "astro preview"
  },
  "dependencies": {
    "astro": "^5.0.0"
  }
}

Step 3: Create portal/astro.config.mjs

import { defineConfig } from 'astro/config';

export default defineConfig({
  output: 'static'
});

Step 4: Create portal/tsconfig.json

{
  "extends": "astro/tsconfigs/strict",
  "compilerOptions": {
    "strictNullChecks": true
  }
}

Step 5: Install dependencies and verify

cd portal && npm install

Step 6: Commit scaffold

cd /Users/falco/Documents/Repositories/gtm
git add portal/
git commit -m "feat: scaffold portal Astro project"

Task 6: Create Layout and Tab Navigation

Files:

  • Create: portal/src/layouts/Layout.astro
  • Create: portal/src/components/TabNav.astro

Step 1: Create Layout.astro

The base layout with white background, Inter font, and global styles.

---
interface Props {
  title?: string;
}
const { title = 'GTM Overview' } = Astro.props;
---
<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>{title}</title>
  <link rel="preconnect" href="https://fonts.googleapis.com" />
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
  <link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet" />
</head>
<body>
  <header class="site-header">
    <div class="header-inner">
      <span class="logo">GTM Overview</span>
      <TabNav />
    </div>
  </header>
  <main>
    <slot />
  </main>
  <style is:global>
    *, *::before, *::after { margin: 0; padding: 0; box-sizing: border-box; }
    body {
      font-family: 'Inter', system-ui, -apple-system, sans-serif;
      background: #ffffff;
      color: #111827;
      line-height: 1.6;
      -webkit-font-smoothing: antialiased;
    }
    .site-header {
      position: sticky;
      top: 0;
      z-index: 100;
      background: #ffffff;
      border-bottom: 1px solid #e5e7eb;
    }
    .header-inner {
      max-width: 1280px;
      margin: 0 auto;
      padding: 0 24px;
      display: flex;
      align-items: center;
      gap: 32px;
      height: 56px;
    }
    .logo {
      font-weight: 700;
      font-size: 16px;
      color: #111827;
      white-space: nowrap;
    }
    main {
      max-width: 1280px;
      margin: 0 auto;
      padding: 32px 24px;
    }
  </style>
</body>
</html>

Note: Import TabNav at top of frontmatter: import TabNav from '../components/TabNav.astro';

Step 2: Create TabNav.astro

---
const currentPath = Astro.url.pathname;
const tabs = [
  { label: 'Pitch Decks', href: '/decks/' },
  { label: 'Data Tables', href: '/data/' },
  { label: 'Images', href: '/images/' },
  { label: 'Documents', href: '/documents/' },
];
---
<nav class="tab-nav">
  {tabs.map(tab => (
    <a
      href={tab.href}
      class:list={['tab-link', { active: currentPath.startsWith(tab.href) }]}
    >
      {tab.label}
    </a>
  ))}
</nav>

<style>
  .tab-nav {
    display: flex;
    gap: 4px;
  }
  .tab-link {
    padding: 16px 16px;
    font-size: 14px;
    font-weight: 500;
    color: #6b7280;
    text-decoration: none;
    border-bottom: 2px solid transparent;
    transition: color 0.15s, border-color 0.15s;
  }
  .tab-link:hover {
    color: #111827;
  }
  .tab-link.active {
    color: #2563eb;
    border-bottom-color: #2563eb;
  }
</style>

Step 3: Create index.astro (redirect to /decks/)

Create portal/src/pages/index.astro:

---
return Astro.redirect('/decks/');
---

Step 4: Run dev server to verify layout renders

cd /Users/falco/Documents/Repositories/gtm/portal
npx astro dev

Create a temporary portal/src/pages/decks/index.astro with placeholder content to see the layout:

---
import Layout from '../../layouts/Layout.astro';
---
<Layout title="Pitch Decks">
  <h1>Pitch Decks</h1>
  <p>Placeholder</p>
</Layout>

Verify: White page, Inter font, sticky header with tabs, "Pitch Decks" tab highlighted blue.

Step 5: Commit

git add portal/src/
git commit -m "feat: add portal layout with tab navigation"

Task 7: Create Deck Metadata and DeckCard Component

Files:

  • Create: portal/src/data/decks.json
  • Create: portal/src/components/DeckCard.astro

Step 1: Create decks.json

Copy the exact JSON from the design doc (12 entries with slug, name, description, tags, group).

Step 2: Create DeckCard.astro

---
interface Props {
  slug: string;
  name: string;
  description: string;
  tags: string[];
  group: string;
}
const { slug, name, description, tags, group } = Astro.props;

const groupLabels: Record<string, string> = {
  client: 'Client',
  template: 'Template',
  story: 'Company Story',
};
---
<a href={`/decks/${slug}/`} class="deck-card">
  <div class="card-header">
    <span class="group-badge">{groupLabels[group] || group}</span>
    <div class="tags">
      {tags.map(tag => <span class="tag">{tag}</span>)}
    </div>
  </div>
  <h3 class="card-title">{name}</h3>
  <p class="card-desc">{description}</p>
</a>

<style>
  .deck-card {
    display: block;
    background: #ffffff;
    border: 1px solid #e5e7eb;
    border-radius: 12px;
    padding: 24px;
    text-decoration: none;
    color: inherit;
    transition: box-shadow 0.2s, transform 0.2s;
  }
  .deck-card:hover {
    box-shadow: 0 4px 12px rgba(0,0,0,0.08);
    transform: translateY(-2px);
  }
  .card-header {
    display: flex;
    justify-content: space-between;
    align-items: center;
    margin-bottom: 12px;
  }
  .group-badge {
    font-size: 11px;
    font-weight: 600;
    text-transform: uppercase;
    letter-spacing: 0.05em;
    color: #6b7280;
    background: #f3f4f6;
    padding: 3px 8px;
    border-radius: 4px;
  }
  .tags { display: flex; gap: 4px; }
  .tag {
    font-size: 11px;
    font-weight: 500;
    color: #2563eb;
    background: #eff6ff;
    padding: 2px 6px;
    border-radius: 4px;
  }
  .card-title {
    font-size: 18px;
    font-weight: 600;
    margin-bottom: 6px;
    color: #111827;
  }
  .card-desc {
    font-size: 14px;
    color: #6b7280;
    line-height: 1.5;
  }
</style>

Step 3: Commit

git add portal/src/data/decks.json portal/src/components/DeckCard.astro
git commit -m "feat: add deck metadata and DeckCard component"

Task 8: Build Pitch Decks Grid Page

Files:

  • Modify: portal/src/pages/decks/index.astro

Step 1: Implement the deck grid page

Replace the placeholder with the full grid page that imports decks.json, groups them by group field, and renders DeckCard components in a CSS grid. Three sections: "Client Decks", "Templates", "Company Story". Each section has a heading and a 3-column grid of cards.

Use CSS grid: grid-template-columns: repeat(auto-fill, minmax(320px, 1fr)) with gap: 20px.

Section headings: font-size: 13px; font-weight: 600; text-transform: uppercase; letter-spacing: 0.05em; color: #9ca3af; margin-bottom: 16px; margin-top: 40px; (first section no margin-top).
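
The grouping step can be sketched as follows (the field names match decks.json; the function name is illustrative):

```typescript
// Group decks.json entries into the three landing-page sections by their `group` field.
type Deck = { slug: string; name: string; group: "client" | "template" | "story" };

function groupDecks(decks: Deck[]): Record<string, Deck[]> {
  const grouped: Record<string, Deck[]> = {};
  for (const deck of decks) (grouped[deck.group] ??= []).push(deck);
  return grouped;
}

// Minimal subset of the real decks.json for illustration.
const decks: Deck[] = [
  { slug: "flaconi-deck", name: "Flaconi Pitch", group: "client" },
  { slug: "pitch-deck-template", name: "Base Template", group: "template" },
  { slug: "the-hook-site", name: "The Hook", group: "story" },
];
console.log(Object.keys(groupDecks(decks))); // [ 'client', 'template', 'story' ]
```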

Step 2: Run dev server, verify grid renders

cd /Users/falco/Documents/Repositories/gtm/portal && npx astro dev

Expected: 3 grouped sections, 12 cards total, hover effects work.

Step 3: Commit

git add portal/src/pages/decks/index.astro
git commit -m "feat: add pitch decks grid page with grouped sections"

Task 9: Build Deck Embed Page

Files:

  • Create: portal/src/pages/decks/[slug].astro

Step 1: Create the dynamic deck embed page

---
import Layout from '../../layouts/Layout.astro';
import decks from '../../data/decks.json';

export function getStaticPaths() {
  return decks.map(d => ({ params: { slug: d.slug } }));
}

const { slug } = Astro.params;
const deck = decks.find(d => d.slug === slug);
if (!deck) return Astro.redirect('/decks/');
---
<Layout title={deck.name}>
  <div class="embed-header">
    <a href="/decks/" class="back-link">&larr; All Decks</a>
    <h1 class="embed-title">{deck.name}</h1>
    <div class="embed-tags">
      {deck.tags.map(tag => <span class="tag">{tag}</span>)}
    </div>
  </div>
  <div class="embed-container">
    <iframe src={`/decks/${slug}/index.html`} class="deck-iframe"></iframe>
  </div>
</Layout>

<style>
  .embed-header {
    display: flex;
    align-items: center;
    gap: 16px;
    margin-bottom: 24px;
  }
  .back-link {
    font-size: 14px;
    color: #6b7280;
    text-decoration: none;
  }
  .back-link:hover { color: #2563eb; }
  .embed-title {
    font-size: 20px;
    font-weight: 600;
  }
  .tag {
    font-size: 11px;
    font-weight: 500;
    color: #2563eb;
    background: #eff6ff;
    padding: 2px 6px;
    border-radius: 4px;
  }
  .embed-tags { display: flex; gap: 4px; }
  .embed-container {
    border: 1px solid #e5e7eb;
    border-radius: 12px;
    overflow: hidden;
    aspect-ratio: 16/9;
  }
  .deck-iframe {
    width: 100%;
    height: 100%;
    border: none;
  }
</style>

Step 2: Verify with dev server

Navigate to /decks/ale-hop-deck/. Should show the back link, title, and an iframe (empty until decks are built and copied).

Step 3: Commit

git add portal/src/pages/decks/\[slug\].astro
git commit -m "feat: add deck embed page with iframe viewer"

Task 10: Build CSV Index and Data Tables Page

Files:

  • Create: portal/src/data/csvIndex.json
  • Create: portal/src/components/CsvTable.astro
  • Create: portal/src/pages/data/index.astro

Step 1: Create csvIndex.json

A JSON object with a categories array grouping all CSV files; each file entry carries its path relative to the repo root and a human-readable label:

{
  "categories": [
    {
      "name": "Pipeline & Accounts",
      "files": [
        { "path": "data/pipeline/accounts.csv", "label": "Accounts" },
        { "path": "data/pipeline/target-customers-full-data.csv", "label": "Target Customers (Full Data)" },
        { "path": "data/pipeline/target-customers-logo-wall.csv", "label": "Target Customers (Logo Wall)" },
        { "path": "data/pipeline/logo-wall-final.csv", "label": "Logo Wall Final" },
        { "path": "data/pipeline/OUTREACH_LOL_MASTER - UNIFIED_MASTER-1.csv", "label": "Outreach Unified Master" }
      ]
    },
    {
      "name": "Merger",
      "files": [
        { "path": "data/merger/master.csv", "label": "Master (Merged)" },
        { "path": "data/merger/master_pain.csv", "label": "Master (Pain Points)" }
      ]
    },
    {
      "name": "Leads",
      "files": [
        { "path": "data/leads/leads_combined.csv", "label": "Leads Combined" },
        { "path": "data/leads/CATEGORY.csv", "label": "Category Leads" },
        { "path": "data/leads/DIGITAL1.csv", "label": "Digital Leads 1" },
        { "path": "data/leads/DIGITAL2.csv", "label": "Digital Leads 2" },
        { "path": "data/leads/ECOM1.csv", "label": "E-Commerce Leads" },
        { "path": "data/leads/PRODUCT1.csv", "label": "Product Leads 1" },
        { "path": "data/leads/PRODUCT2.csv", "label": "Product Leads 2" }
      ]
    },
    {
      "name": "Persons (VP Contacts)",
      "files": [
        { "path": "data/persons/VP-MASTER-LIST-FIXED.csv", "label": "VP Master List" },
        { "path": "data/persons/VP-of-Digital-CLEANED.csv", "label": "VP of Digital (Cleaned)" },
        { "path": "data/persons/VP-of-Ecom-CLEANED.csv", "label": "VP of E-Commerce (Cleaned)" },
        { "path": "data/persons/VP-of-Operations-CLEANED.csv", "label": "VP of Operations (Cleaned)" }
      ]
    },
    {
      "name": "Compare / Overlap Analysis",
      "files": [
        { "path": "data/compare/UNIFIED_MASTER.csv", "label": "Unified Master" },
        { "path": "data/compare/ACCOUNTS_SUMMARY.csv", "label": "Accounts Summary" },
        { "path": "data/compare/PEOPLE_SUMMARY.csv", "label": "People Summary" },
        { "path": "data/compare/ACCOUNTS_ONLY.csv", "label": "Accounts Only" },
        { "path": "data/compare/PEOPLE_ONLY.csv", "label": "People Only" },
        { "path": "data/compare/OVERLAP_DETAILS.csv", "label": "Overlap Details" },
        { "path": "data/compare/leads.csv", "label": "Compare Leads" },
        { "path": "data/compare/listofdomains.csv", "label": "Domain List" }
      ]
    },
    {
      "name": "ICP Analysis",
      "files": [
        { "path": "data/allvideonames16answers/icp_master.csv", "label": "ICP Master (1000+ companies)" },
        { "path": "data/allvideonames16answers/icp_filtered.csv", "label": "ICP Filtered (High Relevance)" },
        { "path": "data/allvideonames16answers/icp_by_industry.csv", "label": "ICP by Industry" }
      ]
    },
    {
      "name": "Video Extractions",
      "files": [
        { "path": "data/videos/fulllist.csv", "label": "Full Ranked List" },
        { "path": "data/videos/fulllistsorted.csv", "label": "Full List (Sorted by Industry)" },
        { "path": "data/videos/200icp.csv", "label": "Top 200 ICP" },
        { "path": "data/videos/companies_part1_clean.csv", "label": "Companies Part 1 (Clean)" },
        { "path": "data/videos/companies_part2_clean.csv", "label": "Companies Part 2 (Clean)" }
      ]
    }
  ]
}

Step 2: Create CsvTable.astro

An Astro component that receives CSV data (headers + rows) and renders a styled table showing the first 10 rows. A "Show all N rows" button expands the rest via client-side JS.

Props: headers: string[], rows: string[][], totalRows: number, label: string.

Table styles: alternating row colors (#f9fafb), sticky header, font-size: 13px, horizontal scroll on overflow, border-collapse: collapse, border: 1px solid #e5e7eb.

Client-side <script> toggles a data-expanded attribute to show/hide remaining rows.

Step 3: Create data/index.astro

At build time, the page reads each CSV file from the repo using fs.readFileSync and parses it (splitting on commas while handling quoted fields) to extract headers and rows. It renders the categories as sections with a CsvTable for each file.

Use Node.js fs and path in frontmatter to read files relative to the repo root (../../data/... from the portal directory).

Each category gets a section heading. Each file gets a CsvTable with its label, headers, and rows.
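The quoted-field handling is the only subtle part of the parse step. A minimal sketch, sufficient for single-line records; swap in a real CSV library (e.g. csv-parse) if any file contains newlines inside quoted fields:

```javascript
// Minimal CSV parser honoring double-quoted fields and "" escapes.
// Sketch only -- does not handle newlines inside quotes.
function parseCsvLine(line) {
  const fields = [];
  let cur = '';
  let inQuotes = false;
  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++; } // escaped quote
      else if (ch === '"') inQuotes = false;
      else cur += ch;
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === ',') {
      fields.push(cur); cur = '';
    } else {
      cur += ch;
    }
  }
  fields.push(cur);
  return fields;
}

function parseCsv(text) {
  const lines = text.trim().split(/\r?\n/);
  return {
    headers: parseCsvLine(lines[0]),
    rows: lines.slice(1).map(parseCsvLine),
  };
}

console.log(parseCsvLine('Acme,"Retail, EU",2024')); // ['Acme', 'Retail, EU', '2024']
```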

Step 4: Run dev server to verify tables render

Expected: Categories listed with expandable tables, alternating row colors, horizontal scroll for wide CSVs.

Step 5: Commit

git add portal/src/data/csvIndex.json portal/src/components/CsvTable.astro portal/src/pages/data/index.astro
git commit -m "feat: add data tables page with CSV viewer"

Task 11: Build Image Gallery Page

Files:

  • Create: portal/src/pages/images/index.astro
  • Create: portal/src/components/Lightbox.astro

Step 1: Create images/index.astro

At build time, scan images/ directory recursively for image files (.png, .jpg, .jpeg, .svg, .webp). Group by subdirectory (ultra/logos, ultra/team, ultra/mvp, bijou). Copy images into portal/public/gallery/ at build time (or reference them directly).

Display as CSS grid: grid-template-columns: repeat(auto-fill, minmax(180px, 1fr)), gap: 12px. Each thumbnail: aspect-ratio: 1, object-fit: cover, border-radius: 8px, cursor: pointer.

Below each image: file name in font-size: 11px; color: #9ca3af.

Step 2: Create Lightbox.astro

Client-side component. When a thumbnail is clicked, shows a full-screen overlay (position: fixed; inset: 0; background: rgba(0,0,0,0.8); z-index: 200) with the image centered at max-width/max-height 90vw/90vh. Click overlay or press Escape to close.

Implement as a <script> block that:

  1. Adds click listeners to all .gallery-thumb elements
  2. Creates a modal div on click
  3. Listens for Escape key and overlay click to close

Step 3: Run dev server, verify gallery renders

Expected: Grid of thumbnails grouped by section, click opens lightbox.

Step 4: Commit

git add portal/src/pages/images/index.astro portal/src/components/Lightbox.astro
git commit -m "feat: add image gallery page with lightbox"

Task 12: Build Documents Page

Files:

  • Create: portal/src/pages/documents/index.astro

Step 1: Create documents/index.astro

At build time, scan documents/ for .md and .txt files. For .md files, render to HTML using a simple markdown-to-HTML approach (Astro has built-in markdown support, or use a lightweight library).

Display as a list of expandable cards. Each card shows:

  • File name as heading
  • First 3 lines as preview
  • Click to expand shows the full rendered content

Also scan deck directories for markdown files:

  • decks/pitch-deck-onepager/VALUE-PROP-SLIDES.md
  • decks/pitch-deck-onepager/PITCH-DECK-PLAN.md
  • decks/teveo-pitch/anforderungen.md
  • decks/teveo-pitch/headlines.md
  • decks/teveo-pitch/storyold.md
  • decks/teveo-pitch/slide-plan.md
  • decks/teveo-pitch-v3/storytwo.md
  • decks/teveo-pitch-v3/slides.md

Group into "Strategy Documents" (from documents/) and "Deck Notes" (from decks/).

Card styles: same as DeckCard - white background, subtle border, rounded corners.

For .txt files, wrap in <pre> with monospace font.
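The three-line preview can be derived before rendering; a sketch (stripping markdown heading markers so previews read cleanly is an assumption, not something the plan specifies):

```javascript
// Take the first N non-empty lines of a document as the card preview,
// dropping leading "#" heading markers (assumed cosmetic choice).
function buildPreview(text, maxLines = 3) {
  return text
    .split(/\r?\n/)
    .map(line => line.replace(/^#{1,6}\s+/, '').trim())
    .filter(line => line.length > 0)
    .slice(0, maxLines);
}

console.log(buildPreview('# Title\n\nFirst para.\nSecond line.\nThird line.'));
// ['Title', 'First para.', 'Second line.']
```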

Step 2: Run dev server, verify documents render

Expected: Markdown rendered as HTML with proper headings, lists, tables. Text files shown as preformatted.

Step 3: Commit

git add portal/src/pages/documents/index.astro
git commit -m "feat: add documents page with markdown rendering"

Task 13: Build Pipeline Script and Copy Assets

Files:

  • Create: portal/build-all.sh
  • Create: portal/copy-assets.sh

Step 1: Create build-all.sh

#!/bin/bash
set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"

echo "=== Building all decks ==="
for deck in "$REPO_ROOT"/decks/*/; do
  name=$(basename "$deck")
  echo "Building $name..."
  (cd "$deck" && npm install --silent && npm run build)
done

echo "=== Copying deck builds into portal/public/decks ==="
mkdir -p "$SCRIPT_DIR/public/decks"
for deck in "$REPO_ROOT"/decks/*/; do
  name=$(basename "$deck")
  if [ -d "$deck/dist" ]; then
    mkdir -p "$SCRIPT_DIR/public/decks/$name"
    cp -r "$deck/dist/." "$SCRIPT_DIR/public/decks/$name/"
    echo "Copied $name"
  fi
done

echo "=== Copying images into portal/public/gallery ==="
mkdir -p "$SCRIPT_DIR/public/gallery"
cp -r "$REPO_ROOT/images/"* "$SCRIPT_DIR/public/gallery/"

echo "=== Building portal ==="
cd "$SCRIPT_DIR"
npm install --silent
npm run build

echo "=== Done ==="

Step 2: Make executable

chmod +x portal/build-all.sh

Step 3: Commit

git add portal/build-all.sh
git commit -m "feat: add build-all.sh pipeline script"

Task 14: Update .gitignore and CLAUDE.md

Files:

  • Modify: .gitignore
  • Modify: CLAUDE.md

Step 1: Update .gitignore

Add:

portal/public/decks/
portal/public/gallery/
portal/dist/

These are build artifacts that shouldn't be committed.

Step 2: Update CLAUDE.md

Add a Portal section describing the new structure:

## Portal (GTM Overview)

A web portal at `portal/` that provides a browsable overview of the entire repo.

### Development
\```bash
cd portal
npm install
npm run dev          # Local dev server
\```

For full build with all decks:
\```bash
cd portal
bash build-all.sh   # Builds all decks + copies assets + builds portal
\```

### Folder Structure
- `decks/` - All 12 Astro pitch deck projects
- `data/` - CSV/TSV/JSON data files (pipeline, merger, leads, persons, compare, ICP analysis)
- `images/` - Brand assets, team photos, client images
- `documents/` - Strategy docs, pitch narratives, design plans
- `_archive/` - Legacy files, video files, backups
- `portal/` - The overview portal Astro project

Step 3: Commit

git add .gitignore CLAUDE.md
git commit -m "chore: update .gitignore and CLAUDE.md for portal structure"

Task 15: Test Full Build and Verify

Step 1: Run the full build pipeline

cd /Users/falco/Documents/Repositories/gtm/portal
bash build-all.sh

This will take a while (12 deck builds + portal build).

Step 2: Preview the built site

cd /Users/falco/Documents/Repositories/gtm/portal
npx astro preview

Step 3: Verify all 4 tabs work

  • Pitch Decks: Grid shows 12 cards, clicking opens iframe with actual deck content
  • Data Tables: Categories listed, tables render with CSV data, expand works
  • Images: Gallery shows thumbnails, lightbox opens on click
  • Documents: Markdown rendered as HTML, text files shown

Step 4: Final commit

git add -A
git commit -m "feat: complete GTM overview portal with decks, data, images, documents"

Task 16: Vercel Deployment

Step 1: Configure Vercel

Set root directory to portal/. Build command: bash build-all.sh. Output directory: dist/.

Or create vercel.json at repo root:

{
  "buildCommand": "cd portal && bash build-all.sh",
  "outputDirectory": "portal/dist",
  "installCommand": "cd portal && npm install"
}

Step 2: Deploy

cd /Users/falco/Documents/Repositories/gtm
npx vercel

Follow the prompts. Link to existing project or create new one.

Step 3: Verify live deployment

Open the Vercel URL and verify all 4 tabs work correctly.

Step 4: Commit Vercel config

git add vercel.json
git commit -m "chore: add Vercel deployment config"

2026 03 02 Teveo Onepager Design

documents/plans/2026-03-02-teveo-onepager-design.md


Teveo Onepager Design

Overview

Single A4 portrait page (794x1123px) containing the full Teveo pitch story from storytwo.md. Magazine editorial layout with the existing deck's brand system (Inter Tight, #6b7ff5 blue accent, gray palette). Content is ~90% word-for-word from storytwo.md with light trimming of filler transitions.

Technical Approach

  • New Astro page at src/pages/onepager.astro
  • Reuses the existing Layout.astro with a CSS override for A4 dimensions instead of 16:9 slides
  • Same fonts (Inter Tight + Geist Mono), same color variables, same card styling
  • Print-optimized: @page set to A4, @media print styles for clean PDF export

Section Layout (~1120px total content height within A4)

1. HEADER (~80px)

  • Teveo logo (left) | separator | UltraRelevant logo
  • Right-aligned: "NOETIQ ONE GmbH - März 2026"
  • Thin gray separator line below

2. AUSGANGSLAGE (~120px)

  • Label: 01 AUSGANGSLAGE (mono, uppercase, 9px)
  • Full paragraph from storytwo section 1, lightly trimmed
  • Key content: weekly launches, 3 people, 4 file sources, Akeneo migration planned, translations via external agency

3. DER AUTOMATISIERTE PROZESS (~280px, two-column)

  • Label: 02 WAS DIE SOFTWARE MACHT
  • Left column: opening paragraph + numbered 5-step list (Auslesen, Texte, GEO, Übersetzen, Export)
  • Right column: simplified pipeline visual (Input files -> UltraRelevant bucket -> Output data card)
  • Below: promise bullets row ("Keine Enrichment-Plattform / Kein Katalog hochladen / Produktdaten nie gespeichert / Modelle immer aktuell")

4. GEO USP (~200px, highlighted)

  • Label: 03 GENERATIVE ENGINE OPTIMIZATION
  • Subtle blue-tinted background (rgba(107,127,245,0.03)) to visually distinguish as the USP
  • Full storytwo section 3 text, lightly trimmed
  • Small callout: "Basierend auf ultrarelevant.com"

5. VALUE (~100px)

  • Label: 04 BESSERE DATEN, BESSERE PERFORMANCE
  • Before/after comparison strip: Tage -> 3 Minuten, Handarbeit -> Automatisiert, Unsichtbar -> Empfohlen von ChatGPT
  • Key sentence about faster launches impacting revenue

6. TEAM FOOTER (~80px)

  • Elena + Falco: photos, names, emails, LinkedIn icons
  • Product URL: productdata.ultrarelevant.com/de
  • Company line: NOETIQ ONE GmbH

Font Sizes (print-density adapted)

  • Section labels: 9px Geist Mono, uppercase, letter-spaced
  • Section headlines: 14-16px Inter Tight, 700 weight
  • Body text: 11-12px Inter Tight, 400 weight
  • Fine print/captions: 9-10px

Content Source Mapping

| Onepager Section | storytwo.md Section | Treatment |
| --- | --- | --- |
| Header | – | Brand only, no storytwo content |
| Ausgangslage | Section 1 | Near-verbatim, trim "Der aktuelle Prozess..." opener |
| Pipeline | Section 2 | Near-verbatim, all 5 steps preserved |
| GEO | Section 3 | Near-verbatim, trim "Der Prozess funktioniert so:" |
| Value | Section 4 | Near-verbatim |
| Team | Section 5 | Same team info as deck |

2026 03 02 Teveo Onepager

documents/plans/2026-03-02-teveo-onepager.md


Teveo Onepager Implementation Plan

For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

Goal: Create a single A4 portrait onepager page containing the Teveo pitch story from storytwo.md, styled with the existing deck's brand system.

Architecture: New Astro page at src/pages/onepager.astro that reuses the existing Layout.astro. Override slide dimensions to A4 portrait (794x1123px). All content hardcoded in-page (no dynamic data). Magazine editorial layout with 6 sections flowing top-to-bottom.

Tech Stack: Astro, HTML, CSS (scoped). Same fonts (Inter Tight + Geist Mono) and color variables from existing Layout.astro.


Task 1: Create onepager.astro with Layout and A4 container

Files:

  • Create: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Create the page file with Layout import, A4 container, and base styles

The page imports Layout.astro and creates a single .onepager container that overrides the slide dimensions to A4 portrait. Add CSS custom properties for the A4 size and override .slide dimensions.

---
import Layout from '../layouts/Layout.astro';
---

<Layout title="Teveo x UltraRelevant - Onepager">
<section class="onepager">
  <!-- Sections will be added in subsequent tasks -->
</section>
</Layout>

<style>
  .onepager {
    width: 794px;
    min-height: 1123px;
    background: var(--white);
    box-shadow: 0 4px 24px rgba(0, 0, 0, 0.1);
    border-radius: 4px;
    position: relative;
    overflow: hidden;
    padding: 48px 56px;
    display: flex;
    flex-direction: column;
    gap: 0;
  }

  @media print {
    .onepager {
      box-shadow: none;
      border-radius: 0;
      width: 100%;
      min-height: 100vh;
    }
  }

  @page {
    size: A4 portrait;
    margin: 0;
  }
</style>

Step 2: Verify it renders

Run cd teveo-pitch-v3 && npx astro dev and open localhost:4321/onepager. Expected: an empty white A4-shaped rectangle on a gray background.

Step 3: Commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add empty onepager page with A4 container"

Task 2: Add HEADER section (logos + meta)

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Add header section inside .onepager

Replace the placeholder comment with the header. Uses same logo paths from index.astro. Teveo logo left, separator, UltraRelevant logo. Right side: company + date. Thin line below.

<!-- HEADER -->
<div class="op-header">
  <div class="op-header-left">
    <img src="/images/logos/teveo-logo.svg" alt="Teveo" class="op-logo-client" />
    <span class="op-logo-sep">|</span>
    <img src="/images/UltraRelevantLOGOBLUE.png" alt="UltraRelevant" class="op-logo-ur" />
  </div>
  <div class="op-header-right mono">
    <span>NOETIQ ONE GmbH</span>
    <span class="op-header-date">März 2026</span>
  </div>
</div>
<div class="op-divider"></div>

Step 2: Add header styles

.op-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding-bottom: 16px;
}

.op-header-left {
  display: flex;
  align-items: center;
  gap: 16px;
}

.op-logo-client {
  height: 32px;
}

.op-logo-sep {
  font-size: 20px;
  font-weight: 300;
  color: var(--gray-300);
}

.op-logo-ur {
  height: 22px;
  opacity: 0.7;
}

.op-header-right {
  display: flex;
  flex-direction: column;
  align-items: flex-end;
  gap: 2px;
  font-size: 10px;
  font-weight: 600;
  letter-spacing: 0.5px;
  color: var(--gray-400);
}

.op-header-date {
  font-weight: 400;
}

.op-divider {
  height: 1px;
  background: var(--gray-200);
  margin-bottom: 20px;
}

Step 3: Verify - Check that logos render and the header looks balanced.

Step 4: Commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add onepager header with logos and meta"

Task 3: Add AUSGANGSLAGE section

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Add section HTML after the divider

<!-- 01 AUSGANGSLAGE -->
<div class="op-section">
  <div class="op-label mono">01 AUSGANGSLAGE</div>
  <p class="op-body">Teveo bringt wöchentlich neue Produkte heraus. Momentan tragen drei Personen die Produktdaten aus vier verschiedenen Dateiquellen in Shopify ein: dem Rangeplan, Sizingcharts, Fotos sowie Material- und Metafield-Informationen, die in Excel-Tabellen, JSON-Dateien oder als PNG vorliegen. In Zukunft soll diese Datenpflege in einem neuen Akeneo PIM stattfinden. Für die internationalen Shops werden die Texte bisher an eine externe Agentur geschickt.</p>
</div>

Step 2: Add section styles (shared by all sections)

.op-section {
  margin-bottom: 20px;
}

.op-label {
  font-size: 9px;
  font-weight: 700;
  letter-spacing: 1.5px;
  color: var(--gray-400);
  margin-bottom: 8px;
  text-transform: uppercase;
}

.op-body {
  font-size: 12px;
  line-height: 1.6;
  color: var(--gray-600);
}

Step 3: Verify and commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add Ausgangslage section to onepager"

Task 4: Add PIPELINE section (two-column: text + visual)

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Add the two-column pipeline section

Left column: intro paragraph + 5 numbered steps. Right column: simplified vertical pipeline visual (files -> bucket -> output). Below both: promise bullets row.

Content from storytwo.md section 2, lightly trimmed.

<!-- 02 WAS DIE SOFTWARE MACHT -->
<div class="op-section">
  <div class="op-label mono">02 WAS DIE SOFTWARE MACHT</div>
  <div class="op-two-col">
    <!-- Left: Text -->
    <div class="op-col-text">
      <p class="op-body op-body-intro">Wir ändern den Prozess bei Teveo nicht. Stattdessen automatisieren wir exakt den Workflow, den sie ohnehin schon haben. Dateien rein, fertige Produktdaten raus. Der gesamte Vorgang dauert ungefähr drei Minuten.</p>
      <ol class="op-steps">
        <li><strong>Daten auslesen & zuordnen:</strong> Das Tool erkennt automatisch, was in den hochgeladenen Dokumenten steht, und ordnet die Informationen den korrekten Feldern zu. Für Akeneo, Shopify, Zalando.</li>
        <li><strong>Marketingtexte schreiben:</strong> Eine KI generiert die Produktbeschreibungen im Schreibstil, den Teveo auf der bestehenden Website verwendet. Ohne manuelle Einstellungen.</li>
        <li><strong>Intent-basierte Optimierung:</strong> Die Produktdaten werden fur KI-Suchmaschinen angereichert.</li>
        <li><strong>Übersetzen:</strong> Automatisch über KI-Modelle oder DeepL. Agentur kann weiterhin eingebunden oder nur zur Freigabe involviert werden.</li>
        <li><strong>Export:</strong> Saubere, strukturierte Daten für Akeneo, Shopify, Zalando, Google und ChatGPT. Per Link teilbar, prüfbar, herunterladbar.</li>
      </ol>
    </div>
    <!-- Right: Pipeline visual -->
    <div class="op-col-visual">
      <div class="op-pipe-input">
        <div class="op-pipe-label mono">INPUT</div>
        <div class="op-pipe-files">
          <span class="op-file-tag">Rangeplan.pdf</span>
          <span class="op-file-tag">Size_Chart.xlsx</span>
          <span class="op-file-tag">Metafield.json</span>
          <span class="op-file-tag">Produktbilder.png</span>
        </div>
      </div>
      <svg class="op-pipe-arrow" viewBox="0 0 24 32" fill="none">
        <path d="M12 4V24" stroke="var(--gray-300)" stroke-width="1.5" stroke-linecap="round"/>
        <polygon points="6,22 12,30 18,22" fill="var(--gray-300)"/>
      </svg>
      <div class="op-pipe-bucket">
        <img src="/images/UltraRelevantLOGOBLUE.png" alt="UltraRelevant" class="op-pipe-bucket-logo" />
        <div class="op-pipe-bucket-grid">
          <span>Extraktion</span><span>Mapping</span>
          <span>Kreation</span><span>Übersetzung</span>
        </div>
      </div>
      <svg class="op-pipe-arrow" viewBox="0 0 24 32" fill="none">
        <path d="M12 4V24" stroke="var(--gray-300)" stroke-width="1.5" stroke-linecap="round"/>
        <polygon points="6,22 12,30 18,22" fill="var(--gray-300)"/>
      </svg>
      <div class="op-pipe-output">
        <div class="op-pipe-label mono">OUTPUT</div>
        <div class="op-pipe-destinations">Akeneo - Shopify - Zalando - Google - ChatGPT</div>
      </div>
    </div>
  </div>
  <!-- Promise bullets -->
  <div class="op-promises">
    <div class="op-promise"><strong>Keine Enrichment-Plattform.</strong> Super einfach.</div>
    <div class="op-promise-div"></div>
    <div class="op-promise"><strong>Kein Katalog hochladen.</strong> Kein Onboarding.</div>
    <div class="op-promise-div"></div>
    <div class="op-promise"><strong>Produktdaten nie gespeichert.</strong> Verarbeitung in DE.</div>
  </div>
</div>

Step 2: Add two-column and pipeline styles

.op-two-col {
  display: flex;
  gap: 28px;
}

.op-col-text {
  flex: 1;
  min-width: 0;
}

.op-col-visual {
  width: 200px;
  flex-shrink: 0;
  display: flex;
  flex-direction: column;
  align-items: center;
  gap: 4px;
}

.op-body-intro {
  margin-bottom: 10px;
}

.op-steps {
  padding-left: 18px;
  font-size: 11px;
  line-height: 1.55;
  color: var(--gray-600);
}

.op-steps li {
  margin-bottom: 4px;
}

.op-steps strong {
  color: var(--gray-700);
}

/* Pipeline visual */
.op-pipe-input, .op-pipe-output {
  text-align: center;
}

.op-pipe-label {
  font-size: 9px;
  font-weight: 700;
  letter-spacing: 1px;
  color: var(--gray-400);
  margin-bottom: 4px;
}

.op-pipe-files {
  display: flex;
  flex-direction: column;
  gap: 3px;
  align-items: center;
}

.op-file-tag {
  font-size: 9px;
  font-family: 'Geist Mono', monospace;
  color: var(--gray-500);
  background: var(--gray-50);
  border: 1px solid var(--gray-200);
  border-radius: 4px;
  padding: 2px 8px;
}

.op-pipe-arrow {
  width: 24px;
  height: 28px;
  flex-shrink: 0;
}

.op-pipe-bucket {
  background: linear-gradient(180deg, #8ba4f8 0%, #6b7ff5 100%);
  border-radius: 12px;
  padding: 10px 14px;
  display: flex;
  flex-direction: column;
  align-items: center;
  gap: 6px;
  box-shadow: 0 4px 16px rgba(100,120,230,0.2);
  width: 100%;
}

.op-pipe-bucket-logo {
  height: 12px;
  filter: brightness(0) invert(1);
  opacity: 0.8;
}

.op-pipe-bucket-grid {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 4px;
  width: 100%;
}

.op-pipe-bucket-grid span {
  font-size: 9px;
  font-weight: 600;
  color: white;
  text-align: center;
  background: rgba(255,255,255,0.18);
  border: 1px solid rgba(255,255,255,0.25);
  border-radius: 6px;
  padding: 5px 4px;
}

.op-pipe-destinations {
  font-size: 9px;
  color: var(--gray-500);
  text-align: center;
}

/* Promise bullets */
.op-promises {
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 16px;
  margin-top: 14px;
  padding-top: 12px;
  border-top: 1px solid var(--gray-100);
}

.op-promise {
  font-size: 10px;
  color: var(--gray-500);
  text-align: center;
}

.op-promise strong {
  color: var(--gray-700);
}

.op-promise-div {
  width: 1px;
  height: 20px;
  background: var(--gray-200);
}

Step 3: Verify - Check two-column layout renders, pipeline visual is centered and compact.

Step 4: Commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add pipeline section with two-column layout to onepager"

Task 5: Add GEO USP section (highlighted)

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Add GEO section with blue-tinted background

Content from storytwo.md section 3, lightly trimmed.

<!-- 03 GENERATIVE ENGINE OPTIMIZATION -->
<div class="op-section op-geo-section">
  <div class="op-label mono">03 GENERATIVE ENGINE OPTIMIZATION</div>
  <p class="op-body">Neben der reinen Automatisierung sorgen wir dafür, dass die Produkte auch dort stattfinden, wo die Kunden suchen: in KIs wie ChatGPT, Gemini oder den Google AI Overviews. Das Ziel ist, dass Teveo-Produkte von der KI empfohlen werden, wenn Nutzer offene Fragen stellen.</p>
  <p class="op-body">Wir analysieren mit externen Datenquellen, was ChatGPT und Gemini bei Anfragen wie <em>"lockeres Langarmshirt fürs Gym"</em> oder <em>"Pump Cover Damen Baumwolle"</em> tatsächlich ausgeben und welche Kriterien sie hervorheben. Basierend auf dieser Analyse bekommt der Marketer direkt Vorschläge, welche Intents er in den Text einbauen kann. Er entscheidet, was reinkommt.</p>
  <p class="op-body op-geo-callout">Basierend auf <strong>ultrarelevant.com</strong>, unserem GEO-Tracking-Tool, das analysiert, wo Brands in KI-Suchmaschinen auftauchen.</p>
</div>

Step 2: Add GEO styles

.op-geo-section {
  background: rgba(107, 127, 245, 0.03);
  border: 1px solid rgba(107, 127, 245, 0.08);
  border-radius: 10px;
  padding: 16px 20px;
}

.op-geo-section .op-body {
  margin-bottom: 6px;
}

.op-geo-section .op-body:last-of-type {
  margin-bottom: 0;
}

.op-geo-callout {
  font-size: 11px;
  font-weight: 500;
  color: #6b7ff5;
  margin-top: 8px;
}

Step 3: Verify and commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add GEO USP section with highlight to onepager"

Task 6: Add VALUE section (before/after strip)

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Add value comparison strip

<!-- 04 BESSERE DATEN, BESSERE PERFORMANCE -->
<div class="op-section">
  <div class="op-label mono">04 BESSERE DATEN, BESSERE PERFORMANCE</div>
  <div class="op-value-strip">
    <div class="op-value-item">
      <span class="op-value-before">Tage pro Launch</span>
      <svg class="op-value-arrow" viewBox="0 0 24 12" fill="none"><path d="M4 6H18" stroke="#6b7ff5" stroke-width="1.5" stroke-linecap="round"/><polygon points="16,2 22,6 16,10" fill="#6b7ff5"/></svg>
      <span class="op-value-after">3 Minuten</span>
    </div>
    <div class="op-value-item">
      <span class="op-value-before">Repetitive Handarbeit</span>
      <svg class="op-value-arrow" viewBox="0 0 24 12" fill="none"><path d="M4 6H18" stroke="#6b7ff5" stroke-width="1.5" stroke-linecap="round"/><polygon points="16,2 22,6 16,10" fill="#6b7ff5"/></svg>
      <span class="op-value-after">Automatisiert</span>
    </div>
    <div class="op-value-item">
      <span class="op-value-before">Unsichtbar in AI-Search</span>
      <svg class="op-value-arrow" viewBox="0 0 24 12" fill="none"><path d="M4 6H18" stroke="#6b7ff5" stroke-width="1.5" stroke-linecap="round"/><polygon points="16,2 22,6 16,10" fill="#6b7ff5"/></svg>
      <span class="op-value-after">Empfohlen von ChatGPT</span>
    </div>
  </div>
  <p class="op-body op-value-note">Die manuelle Datenpflege ist repetitive Arbeit, die das Team jede Woche aufs Neue bindet. Schnellere Launches wirken sich direkt auf den Umsatz aus.</p>
</div>

Step 2: Add value styles

.op-value-strip {
  display: flex;
  gap: 24px;
  justify-content: center;
  margin-bottom: 10px;
}

.op-value-item {
  display: flex;
  align-items: center;
  gap: 6px;
}

.op-value-before {
  font-size: 11px;
  color: var(--gray-400);
  text-decoration: line-through;
}

.op-value-arrow {
  width: 22px;
  height: 12px;
  flex-shrink: 0;
}

.op-value-after {
  font-size: 11px;
  font-weight: 700;
  color: var(--gray-800);
}

.op-value-note {
  text-align: center;
  font-size: 11px;
  color: var(--gray-500);
}

Step 3: Verify and commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add value before/after section to onepager"

Task 7: Add TEAM FOOTER section

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Add team footer

Pushed to bottom of the page with margin-top: auto. Same team data as existing deck.

<!-- TEAM FOOTER -->
<div class="op-team">
  <div class="op-team-divider"></div>
  <div class="op-team-row">
    <div class="op-team-member">
      <img src="/images/team/elena.png" alt="Elena Gonzalo Saul" class="op-team-photo" />
      <div class="op-team-info">
        <span class="op-team-name">Elena Gonzalo Saul</span>
        <span class="op-team-email">elena@ultrarelevant.com</span>
      </div>
    </div>
    <div class="op-team-member">
      <img src="/images/team/Falco.png" alt="Falco Schneider" class="op-team-photo" />
      <div class="op-team-info">
        <span class="op-team-name">Falco Schneider</span>
        <span class="op-team-email">falco@ultrarelevant.com</span>
      </div>
    </div>
    <div class="op-team-url">
      <span class="op-team-url-label mono">PRODUKTDATEN-TOOL</span>
      <a href="https://productdata.ultrarelevant.com/de" class="op-team-link">productdata.ultrarelevant.com/de</a>
    </div>
  </div>
</div>

Step 2: Add team footer styles

.op-team {
  margin-top: auto;
  padding-top: 16px;
}

.op-team-divider {
  height: 1px;
  background: var(--gray-200);
  margin-bottom: 14px;
}

.op-team-row {
  display: flex;
  align-items: center;
  gap: 28px;
}

.op-team-member {
  display: flex;
  align-items: center;
  gap: 10px;
}

.op-team-photo {
  width: 36px;
  height: 36px;
  border-radius: 50%;
  object-fit: cover;
  border: 1px solid var(--gray-200);
}

.op-team-info {
  display: flex;
  flex-direction: column;
  gap: 1px;
}

.op-team-name {
  font-size: 12px;
  font-weight: 600;
  color: var(--gray-900);
}

.op-team-email {
  font-size: 10px;
  color: #6b7ff5;
  font-weight: 500;
}

.op-team-url {
  margin-left: auto;
  display: flex;
  flex-direction: column;
  align-items: flex-end;
  gap: 2px;
}

.op-team-url-label {
  font-size: 8px;
  font-weight: 700;
  letter-spacing: 1px;
  color: var(--gray-400);
}

.op-team-link {
  font-size: 11px;
  color: #6b7ff5;
  font-weight: 500;
  text-decoration: none;
}

Step 3: Verify that the full page renders with all sections and fits within A4 height.

Step 4: Commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: add team footer to onepager"

Task 8: Final polish - spacing adjustments and print verification

Files:

  • Modify: teveo-pitch-v3/src/pages/onepager.astro

Step 1: Visual review and spacing tweaks

Open localhost:4321/onepager and check:

  • All text fits within A4 bounds (no overflow)
  • Section spacing is balanced
  • Pipeline visual is vertically centered in its column
  • GEO highlight box stands out but doesn't overwhelm
  • Team footer sits at the bottom

Adjust margins/paddings as needed to balance the page.

Step 2: Test print/PDF export

Press Cmd+P in the browser and verify the page fits on a single A4 sheet with no page break.
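Relying on browser print defaults makes the A4 check flaky; a small print stylesheet can pin the sheet size instead. This is a sketch only, and it assumes a single top-level wrapper around all one-pager sections (the `.op-page` class name here is hypothetical, not from the existing styles):

```css
/* Hypothetical print rules: force A4 and suppress browser page margins. */
@page {
  size: A4;
  margin: 0;
}

@media print {
  .op-page {
    width: 210mm;
    height: 297mm;
    overflow: hidden;        /* overflow now signals a layout bug, not a second page */
    page-break-after: avoid; /* keep everything on one sheet */
  }
}
```

With this in place, any content that would spill onto a second page is clipped visibly in the print preview, which makes the Step 1 spacing review easier to judge.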

Step 3: Final commit

git add teveo-pitch-v3/src/pages/onepager.astro
git commit -m "feat: polish onepager spacing and print layout"

VALUE PROP SLIDES

decks/pitch-deck-onepager/VALUE-PROP-SLIDES.md


Value Proposition Slides

Audience: User (PM / Category Manager)


Slide 1: ICP

Claim: "High volume. Many fields. Begin where it's obvious."

Explainer: EAN, dimensions first. Translations, claims later.


Slide 2: Problem

Claim: "The product data already exists. You just have to type it again."

Explainer: From 4 PDFs, 6 sheets, 5 mails into 52 PIM fields.


Slide 3: Solution

Claim: "AI does the typing. You handle the exceptions."

Explainer: 90% less manual work. 40% faster time to market.


Slide 4: Differentiation

Claim: TBD

Explainer: TBD


Slide 5: Vision

Claim: "The data cable. Not the storefront."

Explainer: We make your data AI-ready. AI commerce does the rest.


Notes

ICP Slide — Who This Is For

  • High volume product launches
  • Many fields per product
  • Land & expand: simple fields first, expert fields later

Problem Slide — Core Insight

  • The absurdity: data exists, you retype it
  • The consequence: launches wait
  • Emotion: frustration, waste

Solution Slide — Core Promise

  • AI does grunt work (typing)
  • Humans do expert work (exceptions)
  • Result: less manual, faster launches

The Mirror

| Problem | Solution |
| --- | --- |
| You type it again | AI does the typing |
| Launches wait | 40% faster time to market |
| Manual work | 90% less manual work |

PITCH DECK PLAN

decks/pitch-deck-onepager/PITCH-DECK-PLAN.md


Ultra Relevant - Pitch Deck Plan

The Problem With Visual-First

Designing slides before crystallizing what each slide must prove leads to:

  • Beautiful slides that overlap
  • Missing key arguments
  • No clear flow per audience

Solution: Arguments first, then visualization.


The Framework: Argument Blocks

Write each slide as a claim + evidence before any design:

SLIDE: [Name]
CLAIM: One sentence. What must the audience believe after this slide?
EVIDENCE: What proves it?
AUDIENCE: Customer / Investor / Both

Three Decks (Different Sequences, Same Blocks)

Customer Pitch (8-10 slides)

Focus: Your pain → Our solution → Proof it works

| # | Argument | Claim |
| --- | --- | --- |
| 1 | AI Commerce Shift | The way people buy is changing: AI recommends, customer buys |
| 2 | Broken Data Supply Chain | Your current product data flow breaks at PIM ingestion |
| 3 | Coordination Nightmare | 5 departments, 1 agency, scattered data, endless chasing |
| 4 | Quantified Pain | 100 products × 20 fields × 6 languages = impossible to scale |
| 5 | Our Solution | We automate PIM ingestion with an AI agent |
| 6 | How It Works | Dump files → AI extracts → Human approves → PIM updated |
| 7 | Validation | Companies like you already use this |
| 8 | Entry Strategy | Start low-risk with simple fields, expand from there |

Investor Pitch (12-15 slides)

Focus: Market → Pain → Solution → Moat → Team → Ask

| # | Argument | Claim |
| --- | --- | --- |
| 1 | AI Commerce Shift | $300B market shift from search to AI-native buying |
| 2 | Fool's Gold | AI visibility tracking is a distraction (we know, we built it) |
| 3 | The Real Problem | PIM ingestion is the universal choke point |
| 4 | Universal Pain | Every enterprise we talked to confirms this is unsolved |
| 5 | Coordination Nightmare | 5 departments, scattered data, expert knowledge required |
| 6 | Why Incumbents Can't Win | PIMs are database-first, feed mgmt is legacy-locked |
| 7 | Our Solution | AI ingestion agent on top of existing PIMs |
| 8 | How It Works | Dump → Extract → Approve → Sync |
| 9 | The Vision | Phase 1: Painkiller → Phase 2: Data Cable → Phase 3: Feedback Loop |
| 10 | Traction | Validation interviews, paying customers, strategic advisors |
| 11 | Team | Falco (CEO), Elena (COO), Yusuf (CTO) |
| 12 | The Ask | €500k at €4m pre-money, €260k committed |

Value Proposition (3 slides)

Focus: Positioning essence

| # | Argument | Claim |
| --- | --- | --- |
| 1 | The Shift | AI Commerce changes buying behavior fundamentally |
| 2 | Our Position | AI layer on top of PIM → direct path to AI Commerce |
| 3 | Why Us | AI-native, no legacy, born from this exact problem |

Argument Blocks (Master List)

BLOCK: AI Commerce Shift

  • CLAIM: Buying behavior is moving from search to AI recommendation
  • EVIDENCE:
    • OpenAI shopping protocol announced
    • Google Agentic Commerce launched
    • 70% of purchases AI-influenced by 2026
    • "AI recommends → customer buys" replaces "browse → search → compare → buy"
  • AUDIENCE: Both
  • EXISTING SLIDES: hero-badge-stat, why-we-win

BLOCK: Fool's Gold (AI Visibility is a Distraction)

  • CLAIM: AI visibility tracking is superficial - the real problem is deeper
  • EVIDENCE:
    • We built ultra-relevant.com, scaled to 40 companies, 6 paying in 2 months
    • Ticket sizes €8-200/month = trend-riding, not real pain
    • Peec AI raised $21M, YC is funding its 4th prompt tracker, Profound got Sequoia
    • Real problem: hallucination risk, AI can't compare products without good data
  • AUDIENCE: Investor
  • EXISTING SLIDES: origin-tracker-pivot

BLOCK: The Data Supply Chain

  • CLAIM: Product data flows through a broken chain, and we traced exactly where it breaks
  • EVIDENCE:
    • Chain: ERP → PIM → Feed Syndication → Marketplaces → AI
    • Analyzed with global enterprises
    • The bottleneck is always PIM ingestion
  • AUDIENCE: Both
  • EXISTING SLIDES: supplychain-teams-mapped, supplychain-flow-bottleneck

BLOCK: The Coordination Nightmare

  • CLAIM: Filling PIM fields is a painful coordination problem across departments
  • EVIDENCE:
    • 5 different departments + 1 agency involved
    • Unstructured data: TXT, images, PDFs, spreadsheets
    • Error-prone, delays are costly/business-critical
    • Can't hire an intern - requires specialized knowledge
  • AUDIENCE: Both
  • EXISTING SLIDES: problem-manual-coordination, problem-coordination-nightmare

BLOCK: Expert Knowledge Required

  • CLAIM: PIM fields require specialized expertise, not just data entry
  • EVIDENCE:
    • Translation data is marketing/sales
    • Must avoid claims (legal)
    • Marketplace-specific lengths
    • SEO optimization opportunities
    • Industries: Cosmetics, Consumer Electronics, Furniture, Sporting Goods, Manufacturing
  • AUDIENCE: Both
  • EXISTING SLIDES: target-customer

BLOCK: Quantified Pain

  • CLAIM: The scale of manual work is mathematically impossible
  • EVIDENCE:
    • 100 products × 20 fields × 6 languages = massive expert decisions
    • Under time pressure for launches
    • Validated with Essence, Cosnova, GHD, Kellogg's, Bosch
  • AUDIENCE: Both
  • EXISTING SLIDES: quantified-fte-painpoints
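The multiplication in the claim above can be made concrete; a quick back-of-envelope check using the example figures from the bullet (100 products, 20 fields, 6 languages):

```shell
# Scale of the manual work implied by the Quantified Pain claim.
products=100
fields=20
languages=6
echo $((products * fields * languages))  # prints 12000 field-level expert decisions
```

That is 12,000 individual expert decisions per launch wave, which is the number behind "impossible to scale" under launch time pressure.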

BLOCK: Why Incumbents Can't Win

  • CLAIM: PIMs and Feed Management companies cannot solve this
  • EVIDENCE:
    • PIMs (Akeneo, Salsify, Pimcore): 10+ years old, database-first, 20-person AI team too slow
    • Feed Mgmt (Tradebyte, Productsup): 80% effort on API maintenance, built for marketplaces
    • "This is the single biggest operational pain point we have." - Ex-CPO Productsup
    • They can't disrupt themselves - revenue tied to legacy
  • AUDIENCE: Investor (strong), Customer (light)
  • EXISTING SLIDES: competitive-landscape, why-we-win

BLOCK: Our Solution

  • CLAIM: We automate PIM ingestion with an AI agent that sits on top of existing systems
  • EVIDENCE:
    • Don't replace PIM, automate ingestion
    • AI-native architecture (not bolted on)
    • Human-in-the-loop approval
    • Saves 80% of expert time
  • AUDIENCE: Both
  • EXISTING SLIDES: solution-flow-autofill, solution-feed-queue

BLOCK: How It Works

  • CLAIM: Simple flow: dump files → AI extracts → human approves → PIM updated
  • EVIDENCE:
    • Input: TXT, PNG, XLS, PDF into bucket
    • Process: AI understands product graph, extracts, confidence scores
    • Output: Approval UI, send to expert, one-click accept
  • AUDIENCE: Both
  • EXISTING SLIDES: howitworks-steps, solution-review-interface

BLOCK: The Vision (AI Data Cable)

  • CLAIM: We're building the infrastructure layer for AI Commerce
  • EVIDENCE:
    • Phase 1: Painkiller (PIM ingestion)
    • Phase 2: Data Cable (export to ChatGPT, Google, Meta, Perplexity)
    • Phase 3: Feedback Loop (payment protocol, reverse write to ERP, optimization)
  • AUDIENCE: Investor
  • EXISTING SLIDES: vision-future, vision-manifesto, roadmap-timeline

BLOCK: Why PIM is Perfect

  • CLAIM: The PIM is the ideal place for AI automation
  • EVIDENCE:
    • No PII = no risk
    • No central owner = AI can be the owner
    • Direct connections to all systems
    • Can build custom APIs
  • AUDIENCE: Investor (technical validation)
  • EXISTING SLIDES: pim-advantage

BLOCK: Validation & Traction

  • CLAIM: We've validated this with real enterprises
  • EVIDENCE:
    • Interviews: Essence, Cosnova, GHD, Kellogg's, Bosch, Vela, Mann & Hummel
    • ultra-relevant.com: 40 companies, 6 paying customers in 2 months
    • Advisors: Ex-CPO Productsup
    • Pricing: $8k/month + usage
  • AUDIENCE: Investor
  • EXISTING SLIDES: validation-traction

BLOCK: Team

  • CLAIM: We're the right team to solve this
  • EVIDENCE:
    • Falco (CEO): WHU, built AI products for manufacturing, built ultra-relevant.com solo
    • Elena (COO): WHU, VC experience, GTM beast, industry contacts
    • Yusuf (CTO): Rust engineer, knowledge graphs for code → product graphs
  • AUDIENCE: Investor
  • EXISTING SLIDES: (needs slide or part of CTA)

BLOCK: The Ask

  • CLAIM: We're raising to build the AI Data Cable
  • EVIDENCE:
    • €500k at €4m pre-money
    • €260k committed (family office, owns retailer)
    • Looking for 5 strategic angels from industry
  • AUDIENCE: Investor
  • EXISTING SLIDES: cta-contact

Current Slide Inventory (21 slides)

| Slide | Primary Block | Status |
| --- | --- | --- |
| hero-badge-stat | AI Commerce Shift | Built |
| origin-tracker-pivot | Fool's Gold | Built |
| validation-traction | Validation & Traction | Built |
| supplychain-teams-mapped | Data Supply Chain | Built |
| supplychain-flow-bottleneck | Data Supply Chain | Built |
| problem-manual-coordination | Coordination Nightmare | Built |
| problem-coordination-nightmare | Coordination Nightmare | Built |
| problem-monster | Coordination Nightmare | Built |
| quantified-fte-painpoints | Quantified Pain | Built |
| target-customer | Expert Knowledge / ICP | Built |
| solution-flow-autofill | Our Solution | Built |
| solution-feed-queue | Our Solution | Built |
| solution-review-interface | How It Works | Built |
| howitworks-steps | How It Works | Built |
| competitive-landscape | Why Incumbents Can't Win | Built |
| why-we-win | AI Commerce Shift + Incumbents | Built |
| vision-future | The Vision | Built |
| vision-manifesto | The Vision | Built |
| pim-advantage | Why PIM is Perfect | Built |
| roadmap-timeline | The Vision | Built |
| cta-contact | The Ask | Built |

Gaps to Fill

| Missing Block | Needed For | Priority |
| --- | --- | --- |
| Team slide | Investor Pitch | High |
| The Ask (detailed) | Investor Pitch | High |
| Entry Strategy / Land & Expand | Customer Pitch | Medium |

Next Steps

  1. Review each argument block - is the claim sharp enough?
  2. Check evidence for each block - do we have proof?
  3. Map blocks to three deck sequences
  4. Identify which slides need redesign vs. new slides
  5. Create brief for designer with claim + evidence per slide
  6. Build Customer Pitch sequence first (shortest path to revenue)

Anforderungen

decks/teveo-pitch/anforderungen.md


Teveo Pitch - Falco's Requirements (verbatim)

NOTE: Everything before "--- VERBATIM FROM HERE ---" comes from the context summary, because the original messages were lost to compression. From the marker onward, these are Falco's exact words.

From the summary (no longer available in the original)

  • "some are too long now. and get rid of the em dashes.. don't lose value.."
  • "I don't like that yet... reflect... show the process.. more... Launching new products is fast. The copy for them isn't. several people, takes long.. many involved"
  • "no, now you've changed the process.. don't do that, it was correct... only the headline should resonate with the problem... 4 product launches, repetitive work"
  • "okay we need this as an input slide.. so they know we put it in.."
  • "actually what goes in is Lift Collection Rangeplan... this is what goes in..."
  • "don't write where they come from.. i.e. Monday.."
  • "please look at the pitch for Alehop to see which icons we used and which text design + which icons for sheets and PDF and image... glassmorphic, use those in the pitch"
  • "oh come on, of course it shouldn't look that chaotic"
  • "okay this looks stupid.. make it a list"
  • "the product descriptions should be called Rangeplan Lift Longsleeve"
  • "Rangeplan Lift Longsleeve, product images, model sizes & size chart, material information, metafield information; include the file types as well"

--- VERBATIM FROM HERE ---


Context and emails (shared by Falco)

  • Comprehensibility: understanding marketing people
  • Optionality: an offering catering to their needs to ensure a perfect fit

Hello Elena, thank you very much! We're looking at two more providers from the trade fair this week. In the background we're already gathering all the documents so that we can make a decision and get started early next week. In parallel, we're looking at the process of how our PIM creates product description texts. Since this decision will primarily be made in Marketing, and both the creation AND the enrichment with regard to AI search are the focus here, I suggest that you prepare a short presentation on this topic, your process and the workflows behind it. Then we can set up a meeting with Marketing and compare the different providers directly. Best regards

Hey Elena :-) I'll gladly send you our current process for the product description texts. Some of the providers we're looking at even do demos and sample process automations. My tip to you as an informal mentor ;-): look at how big competitors design and run such pitches (e.g. Cernel). Find your USP, take a product from our shop and show how you A: automate the internal process between the product and marketing teams and thereby make it faster (% time saved), and B: end up with a text that makes every Gemini and ChatGPT in the world take notice of Teveo clothing (use a generic search like "I'm going to a wedding in Italy and am looking for a dress as maid of honor", applied purely to Teveo products). Then you have a good chance. Can you manage that by the end of next week? The official email already went out; I'm doing this here independently of Teveo :-) We want to sell them a 3,000-euro SaaS and are pitching against another company… which also does AI enrichment for product data… but not as focused on AI Commerce and Generative Engine Optimization. Adaptable, and we deliver fast!

USP: optimization for AI, ChatGPT, Gemini… so AIs find you. And how we do it… + they can help shape the product with us. We gladly take requests so that we really map their process exactly. START TOMORROW, no onboarding needed. No learning the platform necessary. Self-explanatory.. the pipeline fits

https://teveo.com/en-gb/collections/leggings?utm_source=chatgpt.com

Intent-based use cases under which the product can be found. !!!! + CHATGPT and even GOOGLE… But even without that, an AI agent can GENERATE INTENT-BASED USE CASES FOR THE PRODUCT

NO NEW PROCESS BUT YOUR PROCESS, FIXED!!!! THE PROBLEM IS NOT THE PIM, IT'S GETTING THE DATA INTO THE PIM!

NO STORAGE OF YOUR PRODUCT DATA… PROCESSING IN GERMANY! PIPELINE! NO STORAGE SYSTEM

3,000 MINIMUM.

WE ARE AN ENRICHMENT PLATFORM FOR MIGRATION ENRICHMENT INTERNALLY AND ONE-OFF PRODUCT ___ WORKFLOW… FOR NEW PRODUCT ATTRIBUTES SO THAT AI CAN COMPARE + EXTERNAL DATA SOURCE

SIMPLE INTERFACE… STRAIGHTFORWARD

WE COME FROM GEO… ULTRARELEVANT… EDGE… USP! GEO STORY!!!! FITS AS IS…


Requirement: story and slides

the ChatGPT link as proof

not this here.. And the GEO example tailored to the Teveo Lift Longsleeve.

And then really highlight.. that we automate their process exactly as it is with our pipeline.. that we have no onboarding.. that they can use our tool next week.. that there's no extra tool to learn... that we map the process exactly... that we GEO-optimize, and how we do that... and that we don't store their data either, no data model needed... no uploading of masses of old product data.. that everything still runs in Europe... then the savings... It should be clear that we understood their process better and automate the whole chain...


Requirement: reflect

did you reread the deck? then please give me the reflection again based on what I wrote you... I don't want a ChatGPT proof. the fact that you saw UTM parameters means nothing...

you are a master storyteller... adapt based on that... how would you get the maximum out of this now... YOU are confused... look EXACTLY at what I asked of you and create a story..

LOOK EXACTLY AT WHAT I SAID..


Correction: data model and migration

4 and 5, hello. The problem here is with 4 and 5. We already have the data model, it's in there, so you don't need to say that. But 5, "no migration of old product data needed", sounds stupid, because we're doing a migration project for them in which we prepare their legacy data.

But what's much more important here: that no legacy data has to be uploaded, whether via API or CSV or whatever in our current model, so that they can use this pipeline within a week.

Forget what I had before. And now please think through the real core messages with me. You now also have the email from Ines here in the context, plus my core messages that matter to me.

Now please create a story with those core messages.


Detailed story requirement

Okay, this story you have above. I'd now also like you to change extraction, mapping, text creation and translation.

And the whole thing goes: what happens in their actual process? They have these documents: the range plan, the sizing chart, the photo, and maybe material information and metafield information. Those are PNG, Excel and JSON, for example.

Then our tool takes their Akeneo structure, or a possible PLM, and the taxonomy or Shopify, i.e. that data model, and also maps onto the right selectors in their language. Then we generate marketing texts exactly the way they generated them in the past.

That's the first part: based on their products, we use what they have on their website. The other part is that we then also optimize these for GEO by looking at which intents, i.e. under which queries, these products could be listed, and at what either Google search or ChatGPT cares about.

And then the marketer gets suggestions for how to add further intents to the marketing texts so that the whole thing can be found under them. I'm making the example figuratively here; I've been talking about this wedding dress, but you'd have to do it for the Teveo case. Okay?

And for that we take external data sources and search ChatGPT and Gemini. Why can we do that? Because we built a SaaS, ultrarelevant.com; it's a GEO engineering tool. Basically it checks where brands show up in ChatGPT.

Players in this market are, for example, Peec AI and Profound AI, where 25 prompts are tested and analyzed every night. So we know our stuff; we come from the GEO market. But we want to move into the world of product data because we see a bigger lever here. Exactly this: not a week of setup, it's live next week. The tool is already configured and the platform is genuinely self-explanatory; in fact it isn't a platform but a workflow: throw something in at the front, it gets passed through, and out comes something that fits their process exactly as it already was.

So can you extend the story with that now?

You can also mention ultrarelevant.com again here as the link for the GEO tracking tool. And then: I don't want you to change any slides yet. I want you to first give me the full plan, the full story, with all the points I've named.

So, what I'm still missing is why this is basically worth it for them. Basically, what they currently do is: they have these four different files, and three people currently enrich systems, Shopify for example, with this information.

Later the whole thing will be done in the PIM, and that's where we help, because the documents are read out immediately. And then we also generate the remaining fields that really matter for the PIM. Okay, so do it step by step, fitting their process exactly.

And currently, for example, translations are sent to an agency. We can do the translations either with AI models or automatically with DeepL; they then only need to be approved. So in the end it should be clear that we save an unbelievable amount of time and an unbelievable amount of money, because we automate this whole process with our pipeline, which is basically also a SaaS, but tailored to their product.

Reflect. I don't want you to write slides; I want you to write me a story with all the information I've now given. If anything is ambiguous, let me know.


Correction: language and tone

So now you've gone and said random things again, like "pipeline" or that kind of crap. What the hell is that supposed to mean?

I also don't want to hear things like extraction, mapping, text creation and translation, okay? Instead, like I told you, first write me the damn story without any slides, with no marketing language or anything like that, so that I read it and understand it, and don't leave anything out this time. Okay, this is only about the words. I don't want labels; I want you to simply put down a story and have it in the headline.

The headline doesn't have to punch or anything; it should simply... please, just take all my points and list them now. My story, my points. As a normal story, in normal language.


Correction: process and assumptions

At the beginning we add: Teveo does this once a week. It also only takes about a week each time, or rather it doesn't take weeks, but it's the same thing every time.

And now there's a PIM that is to be newly maintained. And you're still saying things like "then the tool maps the data onto their structure". Where did you get that? I told you to stop using concepts.


Correction: agency and website texts

Okay, first: the agency may still be needed, so don't phrase it like that. I don't know, because we now have this human verification at the end. So maybe leave that out completely and just say that we also do the translation. The one that basically comes out of DeepL. The best translation there is on the market. We can also use the AI model.

The product team does not gather the product data. Where would you know that from? Crap assumption. Rather, the data I named is definitely there. This process runs on a weekly cadence and takes roughly a week each time. You can't say it takes a week, and we don't know whether at some point, for example, only a PIM will be maintained and no longer Shopify, because the PIM is being added. So don't write that either.

What I keep missing here is: this is the process, and we now automate this whole process for you, make it faster, step in along the way, and also make the whole thing GEO-optimized in exactly the right fields, i.e. it can be exported into your Akeneo, into your Shopify or into the potential PLM.

Exactly, and then we also have the Zalando color selector, which is great, and we map it here too onto the fields we need for ChatGPT and Gemini. Shopify taxonomy, Akeneo structure, and whatever ChatGPT and Gemini need. Why does that matter? When those fields are filled and then sent to ChatGPT and Gemini, or ChatGPT and Gemini can pull them from a website, then more fully populated product data influences GEO positively, i.e. makes it more likely that ChatGPT buys your products.

At the same time, we also adapt the marketing texts intent-based and so on, so that we represent this better. That's the second point, right? We generate marketing copy based on what's written under the products on teveo.com. That's descriptive too. It means the AI writes in your tone, like your existing texts. You don't have to configure any of that; that's also important.

As for those external data sources: yes, we analyze what ChatGPT and Gemini actually output. Exactly, PeakAI is written P-E-E-C AI, plus Profound AI. Those are AI rank tracking tools; explain that and mention ultrarelevant.com again there.

And then translation, exactly; we also generate the fields for the new PIM from the texts, including the Zalando data. That should rather go further up, and then: the catalog doesn't have to be imported, exactly; this here is redundant again. So we now have repetitions again. But in principle this is basically right.

I'd now like you to make it more sympathetic and write it basically like an email. Not as the actual email, but in this text you've just written. And please, in case I've left anything out, look back at the context and the information I gave you to see whether anything was forgotten.

Headlines

decks/teveo-pitch/headlines.md


Teveo Pitch Headlines

1. The same thing every week: three people transfer product data from four files into Shopify by hand. The translations go to an external agency.

2. We don't change your process. We automate exactly the workflow you already have. Files in, finished product data out. We are not an enrichment platform; there's no new tool to learn. No catalog upload, and your product data is never stored by us. Processing exclusively in Germany.

3. The tool automatically recognizes what's in the range plan, the images and the Excel lists, and assigns everything to the right fields. For Akeneo, Shopify, Zalando.

4. An AI writes your product texts the way you've always written them. Learned from your existing website, with no manual configuration.

5. We analyze what AIs actually recommend for product queries and give your marketer concrete suggestions. They decide what goes in.

6. We translate all texts automatically, either with our AI models or via DeepL. You can use our translations directly, keep involving the agency, or involve it only for sign-off.

7. At the end, clean data is ready to go straight into Akeneo, Shopify, Zalando, Google or ChatGPT. Anyone on the team can share, review and download the current state at any time via a link.

8. Product data launch-ready in three minutes instead of days. At the same time, your products are found more easily in ChatGPT and Google AI, because the data is complete and the GEO optimization kicks in.

9. Small team, big ambitions. With ultrarelevant.com, we come from the Generative Engine Optimization space.

Storyold

decks/teveo-pitch/storyold.md


Teveo Pitch Story

1. Die Ausgangslage bei Teveo

Teveo bringt wöchentlich neue Produkte heraus. Der aktuelle Prozess, um diese Produkte online zu bringen, ist manuell. Momentan tragen drei Personen die Produktdaten aus vier verschiedenen Dateiquellen in Shopify ein: dem Rangeplan, Sizingcharts, Fotos sowie Material- und Metafield-Informationen, die in Excel-Tabellen, JSON-Dateien oder als PNG vorliegen.

In Zukunft soll diese Datenpflege in einem neuen Akeneo PIM stattfinden. Für die internationalen Shops werden die Texte bisher an eine externe Agentur geschickt.

2. Was die Software konkret macht (Der automatisierte Prozess)

Wir ändern den Prozess bei Teveo nicht. Stattdessen automatisieren wir exakt den Workflow, den sie ohnehin schon haben. Das Ziel ist eine simple Pipeline: Vorne werden die Dateien hochgeladen, das Tool verarbeitet sie in fünf Schritten, und hinten kommen die fertigen Daten heraus. Der gesamte Vorgang – vom Upload bis zum exportfähigen Datensatz – dauert mit unserer Pipeline ungefähr drei Minuten.

Wir sind keine komplexe Plattform, für die man Onboarding benötigt. Man gibt den Input hinein und erhält das Ergebnis.

Der Prozess läuft in fünf Schritten ab:

  1. Extract & map data: The tool automatically detects what the uploaded documents contain and maps the information to the correct fields. For Akeneo, Shopify, Zalando, or specific requirements such as the Zalando color selector.

  2. Write marketing copy: An AI generates the product descriptions in the writing style Teveo uses on its existing website. No manual configuration.

  3. Intent-based optimization: The product data is enriched for AI search engines (more on this in Section 3).

  4. Translate: All texts are translated automatically, via DeepL or our own AI models. The team only signs off.

  5. Export: Clean, structured data, ready for Akeneo, Shopify, Zalando, Google, and ChatGPT.
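
Purely as an illustration, the five steps could be sketched as a chain of small functions. Everything below is hypothetical (names like `extract_fields` and the placeholder logic are not the real tool); it only mirrors the flow described above:

```python
from dataclasses import dataclass, field

@dataclass
class Product:
    fields: dict = field(default_factory=dict)        # mapped attributes (name, material, ...)
    copy: str = ""                                    # marketing copy in the house style
    intents: list = field(default_factory=list)       # GEO intents added in step 3
    translations: dict = field(default_factory=dict)  # language code -> translated copy

def extract_fields(files: list[dict]) -> Product:
    """Step 1: read the uploaded files and map values onto target fields."""
    product = Product()
    for f in files:
        product.fields.update(f)  # real per-file-type mapping logic goes here
    return product

def write_copy(product: Product) -> Product:
    """Step 2: generate copy in the brand's style (an LLM call in practice)."""
    product.copy = f"{product.fields.get('name', '')}: {product.fields.get('material', '')}"
    return product

def add_intents(product: Product, intents: list[str]) -> Product:
    """Step 3: enrich the product with the search intents the marketer approved."""
    product.intents.extend(intents)
    return product

def translate(product: Product, langs: list[str]) -> Product:
    """Step 4: machine-translate the copy (DeepL or an own model in practice)."""
    product.translations = {lang: product.copy for lang in langs}  # placeholder
    return product

def export(product: Product) -> dict:
    """Step 5: emit one clean record ready for the target systems."""
    return {
        "fields": product.fields,
        "copy": product.copy,
        "intents": product.intents,
        "translations": product.translations,
    }
```

In practice steps 2 to 4 would call an LLM, the intent analysis, and a translation backend respectively; the sketch only fixes the data flow from upload to export-ready record.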

Collaboration: Different people can open, review, and approve the current state via a link. No sending Excel lists or Word documents around.

3. The USP: Generative Engine Optimization (GEO)

Beyond pure automation, we make sure the products also show up where customers search: in AIs such as ChatGPT, Gemini, or the Google AI Overviews. The goal is for Teveo products to be recommended by the AI when users ask open-ended questions.

The process works like this: first, our tool generates the product copy in the Teveo style as usual. Then comes the optimization. For example, we want the Lift Longsleeve to be found for queries like "loose long-sleeve shirt for the gym that I can also wear day to day" or "pump cover women's cotton". To do that, our software uses external data sources to analyze what ChatGPT and Gemini actually return for exactly these queries and which criteria they emphasize in their answers.

Based on this analysis, the marketer gets direct suggestions for which intents to work into the copy so the product surfaces for such queries. They decide what goes in.
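
The suggestion step can be illustrated with a toy ranking: count how often each candidate intent's keywords appear in sampled AI answers and propose the hits to the marketer. This is a hypothetical sketch only (the real analysis uses external data sources, not plain keyword counts):

```python
from collections import Counter

def suggest_intents(ai_answers: list[str],
                    candidate_intents: dict[str, list[str]]) -> list[str]:
    """Rank candidate intents by how often their keywords appear in AI answers.

    ai_answers: sampled answers from AI search engines for the target queries.
    candidate_intents: {intent label: keywords that signal this intent}.
    Returns intent labels with at least one hit, highest score first;
    the marketer decides which of them actually go into the copy.
    """
    text = " ".join(answer.lower() for answer in ai_answers)
    scores = Counter()
    for intent, keywords in candidate_intents.items():
        scores[intent] = sum(text.count(kw.lower()) for kw in keywords)
    return [intent for intent, score in scores.most_common() if score > 0]
```

The ranking is deliberately naive; the point is only that the output is a shortlist for a human, not an automatic rewrite.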

We can do this because we built ultrarelevant.com, a GEO tracking tool that analyzes where brands appear in AI search engines. We come from that market and are now moving into product data because we see greater leverage there.

4. Setup & Security

The tool is ready to use immediately. No catalog upload, no API integration, no CSV import. We already have the data model. Product data is not stored with us. Processing runs in Germany.

We are not an enrichment platform that has to be configured first. There is no field mapping, no rules, no onboarding. We have understood Teveo's process and automate it exactly as it runs today. Other vendors build platforms with many configuration options for different use cases. That sounds flexible, but it means ramp-up time, complexity, and ongoing maintenance. Our approach is the opposite: files in, result out.

We tailor the pipeline to your process and take care of the final details in the data model. After that, everything just runs. No further work on Teveo's side. The team can start right away without having to learn a new tool.

5. Better Data, Better Performance

Manual data maintenance is repetitive work that ties up the team anew every week. The pipeline eliminates that. Product data is ready for launch in three minutes instead of days. Faster launches feed directly into revenue.

At the same time, Teveo products become easier to find in ChatGPT and Google AI because the product data is completely filled in and the GEO optimization takes effect.

6. The Team

Small team, big ambitions. With ultrarelevant.com, we come from the Generative Engine Optimization space.

Slide Plan

decks/teveo-pitch/slide-plan.md


Teveo Pitch — Slide Plan v2 (story-first, no demo screenshots)

What is THE POINT

  1. We understand your process exactly
  2. We automate it exactly as it is
  3. We also do GEO (no one else does this for product data)
  4. You can start immediately

The Story as Beats

Beat 1 (§1): "They understand exactly how we work."
Beat 2 (§2 intro): "They change nothing, they just make it faster."
Beat 3 (§2 steps): "That's how it runs. 5 steps, 3 minutes."
Beat 4 (§3): "They don't just automate: ChatGPT and Gemini will then recommend OUR products."
Beat 5 (§3): "They know what they're talking about, they come from the GEO market."
Beat 6 (§4): "I can start next week without preparing anything."
Beat 7: "Let's talk."


Slides

SLIDE 1: TITLE

┌─────────────────────────────────────────────────┐
│                                                 │
│   [TEVEO]  |  [ULTRARELEVANT]                   │
│                                                 │
│   Product copy that sells.                      │
│   Visible in AI search.                         │
│                                                 │
│   NOETIQ ONE GmbH · March 2026                  │
└─────────────────────────────────────────────────┘

SLIDE 2: YOUR PROCESS (Story §1)

┌─────────────────────────────────────────────────┐
│  We know your process                   [logos] │
│                                                 │
│  TEXT:                        FILES:            │
│                                                 │
│  New products every week.     [Rangeplan.pdf]   │
│  3 people enter the data            ↘           │
│  from 4 sources               [Size_Chart.xlsx] │
│  into Shopify.                      ↘           │
│                               [Produktbild.png] │
│  New Akeneo PIM coming.             ↘           │
│  Translations go to           [Material.xlsx]   │
│  an external agency.                ↘           │
│                               [Metafield.json]  │
│                                     ↓           │
│                                 → SHOPIFY       │
└─────────────────────────────────────────────────┘

Split layout. Left: story §1 as bullets. Right: their 4 file types flowing into Shopify. Empathy, no problem framing.


SLIDE 3: WE AUTOMATE IT (Story §2 intro)

┌─────────────────────────────────────────────────┐
│  Your process, automated                [logos] │
│                                                 │
│  We change nothing. We automate exactly         │
│  the workflow you already have.                 │
│                                                 │
│  ┌───────────┐              ┌────────────────┐  │
│  │ Your      │              │ Finished data  │  │
│  │ files     │              │                │  │
│  │           │  ─ 3 min ─►  │ → Akeneo PIM   │  │
│  │ PDF XLSX  │              │ → Shopify      │  │
│  │ PNG JSON  │              │ → Zalando      │  │
│  │           │              │ → Google Gemini│  │
│  └───────────┘              └────────────────┘  │
│                                                 │
│  No platform. No onboarding.                    │
│  Input in, result out.                          │
└─────────────────────────────────────────────────┘

The transformation. Same files as slide 2, but now through us → finished data in 3 minutes. The output side shows where it goes: Akeneo, Shopify, Zalando, Gemini.


SLIDE 4: WHAT HAPPENS INSIDE (Story §2: the 5 steps)

┌─────────────────────────────────────────────────┐
│  What happens in these 3 minutes        [logos] │
│                                                 │
│  1  Extract & map data                          │
│     Detects info from range plan, images,       │
│     Excel. Maps them to the fields:             │
│     Akeneo PIM, Shopify, PLM, Zalando.          │
│                                                 │
│  2  Write marketing copy                        │
│     AI writes in the Teveo style. Learned       │
│     from your website, no setup needed.         │
│                                                 │
│  3  GEO optimization                            │
│     Enrich the copy so that ChatGPT             │
│     and Gemini recommend Teveo.                 │
│                                                 │
│  4  Translate                                   │
│     DeepL or our own AI models.                 │
│     Team only signs off at the end.             │
│                                                 │
│  5  Finished data                               │
│     Export to Akeneo, Shopify, Zalando.         │
└─────────────────────────────────────────────────┘

The 5 steps as a numbered list. No screenshot needed. The description IS the content. Each step = one sentence.


SLIDE 5: GEO, THE DIFFERENCE (Story §3)

┌─────────────────────────────────────────────────┐
│  Visible in ChatGPT, Gemini,            [logos] │
│  Google AI Overviews                            │
│                                                 │
│  First the tool generates the copy in           │
│  the Teveo style. Then we optimize it           │
│  so AI search recommends Teveo.                 │
│                                                 │
│  ┌────────────────────────────────────────┐     │
│  │                                        │     │
│  │  Lift Longsleeve                       │     │
│  │                                        │     │
│  │  "loose long-sleeve shirt for the gym  │     │
│  │   that I can also wear day to day"     │     │
│  │                                        │     │
│  │  "pump cover women's cotton"           │     │
│  │                                        │     │
│  │  → The marketer gets suggestions for   │     │
│  │    which intents to build in.          │     │
│  │    They decide what goes in.           │     │
│  │                                        │     │
│  └────────────────────────────────────────┘     │
└─────────────────────────────────────────────────┘

THE wow moment. A concrete example with the Lift Longsleeve. No theory, but "this is what it looks like for YOUR product."


SLIDE 6: WHY WE CAN DO THIS (Story §3 credibility)

┌─────────────────────────────────────────────────┐
│  We come from the GEO market            [logos] │
│                                                 │
│  ultrarelevant.com                              │
│  A GEO tracking tool that checks                │
│  where brands show up in ChatGPT.               │
│                                                 │
│  This market also includes                      │
│  Peec AI and Profound AI: testing and           │
│  analyzing hundreds of prompts.                 │
│                                                 │
│  We are now moving into product data            │
│  because we see greater leverage there.         │
│                                                 │
│  ultrarelevant.com                              │
└─────────────────────────────────────────────────┘

Origin. Credibility. Why we can do GEO. Short and clear. No feature list.


SLIDE 7: LIVE NEXT WEEK (Story §4 + §2 collaboration)

┌─────────────────────────────────────────────────┐
│  Live next week                         [logos] │
│                                                 │
│  ✓ No onboarding needed                         │
│  ✓ No uploading old product data                │
│  ✓ No catalog, no API, no CSV import            │
│  ✓ We already have the data model               │
│  ✓ Product data is never stored                 │
│  ✓ Processing in Germany                        │
│                                                 │
│  ┌────────────────────────────────────┐         │
│  │ Collaboration: share the current   │         │
│  │ state via a link. Verify, add to   │         │
│  │ it. No sending Excel back and      │         │
│  │ forth.                             │         │
│  └────────────────────────────────────┘         │
└─────────────────────────────────────────────────┘

Every objection removed. Plus collaboration as a bonus.


SLIDE 8: CTA + TEAM

┌─────────────────────────────────────────────────┐
│                                                 │
│     Reduce manual effort.                       │
│     Visible in AI search.                       │
│                                                 │
│     Let's talk                                  │
│                                                 │
│  [Elena]  elena@ur    [Falco]  falco@ur         │
│                                                 │
│  [QR]  productdata.ultrarelevant.com            │
└─────────────────────────────────────────────────┘

The last sentence of the story as the headline.


Story-to-Slide Mapping

Story → Slide
§1 complete → Slide 2
§2 intro ("change nothing, automate, 3 min") → Slide 3
§2 the 5 steps → Slide 4
§2 collaboration (share button) → Slide 7
§3 GEO example (Lift Longsleeve intents) → Slide 5
§3 credibility (ultrarelevant, Peec, Profound) → Slide 6
§4 setup & security → Slide 7
§4 last sentence → Slide 8 (CTA)

What's Different vs. v1

  • 8 slides instead of 10: tighter
  • No demo screenshots: the story does the talking, no product UI
  • No dropzone: it was a product feature, not a story beat
  • Slide 4 is NEW: the 5 steps as a clear numbered list, straight from the story
  • Slide 5 (GEO) stands alone: its own slide just for the Lift Longsleeve example, the wow moment
  • Everything is 1:1 story text: no invented headlines, no marketing speak

Design Basis

Ale-hop deck design:

  • Font: Geist + Geist Mono
  • Header: h2 + logos on the right (slide-top)
  • Split layout for slide 2 (text + glassmorphic file cards)
  • Bento flow for slide 3 (input → output)
  • Numbered list for slide 4
  • Styled card for slide 5 (GEO intents)
  • Bullets for slide 6
  • Checkmarks + card for slide 7
  • Team bar for slide 8

Storytwo

decks/teveo-pitch-v3/storytwo.md


Teveo Pitch Story

1. The Starting Point at Teveo

Teveo releases new products every week. The current process for getting these products online is manual. At the moment, three people enter the product data from four different file sources into Shopify: the range plan, sizing charts, photos, and material and metafield information, which come as Excel spreadsheets, JSON files, or PNGs.

Going forward, this data maintenance is to move into a new Akeneo PIM. For the international shops, the texts have so far been sent to an external agency.

2. What the Software Actually Does (The Automated Process)

We don't change Teveo's process. Instead, we automate exactly the workflow they already have. The goal is a simple pipeline: files go in at the front, the tool processes them in five steps, and the finished data comes out at the back. The entire run, from upload to export-ready record, takes roughly three minutes with our pipeline.

We are not an enrichment platform that has to be configured first. Super simple, no onboarding, no catalog upload. Product data is never stored by us. Processing exclusively in Germany. You put the input in and get the result out.

The process runs in five steps:

  1. Extract & map data: The tool automatically detects what the uploaded documents contain and maps the information to the correct fields. For Akeneo, Shopify, Zalando, or specific requirements such as the Zalando color selector.

  2. Write marketing copy: An AI generates the product descriptions in the writing style Teveo uses on its existing website. No manual configuration.

  3. Intent-based optimization: The product data is enriched for AI search engines.

  4. Translate: We translate all texts automatically, either via our own AI models or via DeepL. You can use our translations as-is, keep the agency involved, or bring them in only for sign-off.

  5. Export: Clean, structured data, ready for Akeneo, Shopify, Zalando, Google, and ChatGPT. Anyone on the team can share, review, and download the current state at any time via a link. No sending Excel lists or Word documents around.
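
As a sketch of step 4 only: the translation backend can be kept pluggable, so DeepL, an own model, or the agency sign-off loop can all sit behind the same interface. All names here are illustrative, not the real tool:

```python
from typing import Callable

def translate_record(record: dict[str, str],
                     langs: list[str],
                     backend: Callable[[str, str], str]) -> dict[str, dict[str, str]]:
    """Translate every text field of a product record into every target language.

    backend(text, lang) is any translation callable, e.g. a thin wrapper
    around a DeepL client or an own model (both hypothetical here).
    Returns {lang: {field: translated text}}.
    """
    return {
        lang: {field: backend(text, lang) for field, text in record.items()}
        for lang in langs
    }
```

Because the backend is injected, swapping DeepL for an own model, or routing the output through the agency for approval, changes one argument rather than the pipeline.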

3. The USP: Generative Engine Optimization (GEO)

Beyond pure automation, we make sure the products also show up where customers search: in AIs such as ChatGPT, Gemini, or the Google AI Overviews. The goal is for Teveo products to be recommended by the AI when users ask open-ended questions.

The process works like this: first, our tool generates the product copy in the Teveo style as usual. Then comes the optimization. For example, we want the Lift Longsleeve to be found for queries like "loose long-sleeve shirt for the gym that I can also wear day to day" or "pump cover women's cotton". To do that, our software uses external data sources to analyze what ChatGPT and Gemini actually return for exactly these queries and which criteria they emphasize in their answers.

Based on this analysis, the marketer gets direct suggestions for which intents to work into the copy so the product surfaces for such queries. They decide what goes in.

We can do this because we built ultrarelevant.com, a GEO tracking tool that analyzes where brands appear in AI search engines. We come from that market and are now moving into product data because we see greater leverage there.

4. Better Data, Better Performance

Manual data maintenance is repetitive work that ties up the team anew every week. The pipeline eliminates that. Product data is ready for launch in three minutes instead of days. Faster launches feed directly into revenue.

At the same time, Teveo products become easier to find in ChatGPT and Google AI because the product data is completely filled in and the GEO optimization takes effect.

5. The Team

Small team, big ambitions. With ultrarelevant.com, we come from the Generative Engine Optimization space.

Slides

decks/teveo-pitch-v3/slides.md


Teveo Pitch Slides

SLIDE 1 — Problem

Headline: The same thing every week: three people manually transfer product data from four files into Shopify. The translations go to an external agency.

Visual:

┌───────────┐  ┌───────────┐  ┌───────────┐  ┌─────────┐
│ Rangeplan │  │  Sizing   │  │  Photos   │  │  Excel  │
│   .pdf    │  │   .json   │  │   .png    │  │Metafield│
└─────┬─────┘  └─────┬─────┘  └─────┬─────┘  └────┬────┘
      └──────────────┼──────────────┘             │
                     ▼                            ▼
           ┌─────────────────────────────────────────┐
           │  3 people · manual · every week         │
           └──────────────────┬──────────────────────┘
                              │
                 ┌────────────▼────────────┐
                 │  Export to agency       │
                 │  (translation)          │
                 └────────────┬────────────┘
                 ┌────────────▼────────────┐
                 │  Import back into       │
                 │  systems                │
                 └─────────────────────────┘

SLIDE 2 — Solution

Headline: We don't change your process. We automate exactly the workflow you already have. Files in, finished product data out.

Visual:

┌────────┐    ┌──────┐┌──────┐┌──────┐┌──────┐┌──────┐    ┌──────┐
│        │    │  1.  ││  2.  ││  3.  ││  4.  ││  5.  │    │      │
│ File   │───▶│Read &││Write ││ GEO  ││Trans-││Export│───▶│ Data │
│ upload │    │ map  ││ copy ││      ││ late ││      │    │ready │
└────────┘    └──────┘└──────┘└──────┘└──────┘└──────┘    └──────┘

Points below:

  • No enrichment platform. / Super simple.
  • No catalog upload. / No onboarding.
  • Product data never stored. / Processing in DE.

SLIDE 3 — Extract & Map

Label: STEP 1

Headline: The tool automatically detects what's in the range plan, images, and Excel lists, and maps everything to the right fields. For Akeneo, Shopify, Zalando.

Visual:

┌──────────────────┐         ┌──────────────────────┐
│  Rangeplan.pdf   │         │  Akeneo PIM          │
│                  │         │                      │
│  Lift Longsleeve │         │  Product name ✓      │
│  95% Cotton      │────────▶│  Material ✓          │
│  Color: Indigo   │         │  Color ✓             │
│  S / M / L / XL  │         │  Zalando selector ✓  │
│                  │         │  Sizes ✓             │
└──────────────────┘         └──────────────────────┘

SLIDE 4 — Writing Copy

Label: STEP 2

Headline: An AI writes your product copy the way you've always written it. Learned from your existing website, with no manual configuration.

Visual:

┌────────────────────────────────────────────────────┐
│                                                    │
│  "The Lift Longsleeve combines a relaxed           │
│   oversize cut with soft cotton jersey.            │
│   Perfect for training and everyday wear."         │
│                                                    │
└────────────────────────────────────────────────────┘

            Style learned from: teveo.com

SLIDE 5 — GEO

Label: STEP 3

Headline: We analyze what AIs actually recommend for product queries and give your marketer concrete suggestions. They decide what goes in.

Visual:

1. QUERY                2. WHAT AIs RECOMMEND
┌─────────────────┐     ┌─────────────────────────┐
│ "loose long-    │     │ ChatGPT highlights:     │
│  sleeve shirt   │────▶│ - cotton                │
│  for the gym"   │     │ - oversize cut          │
└─────────────────┘     │ - gym + everyday        │
                        └────────────┬────────────┘
                                     │
3. SUGGESTIONS TO MARKETER           ▼
┌──────────────────────────────────────────────┐
│ Intent "pump cover women's cotton"     [+ ]  │
│ Intent "gym longsleeve oversize"       [+ ]  │
│ Intent "long-sleeve training everyday" [+ ]  │
└──────────────────────────────────────────────┘

SLIDE 6 — Translating

Label: STEP 4

Headline: We translate all texts automatically, either via our own AI models or via DeepL. You can use our translations as-is, keep the agency involved, or bring them in only for sign-off.

Visual:

              ┌──────────────────┐
              │  Product name    │
              │  Description     │
              │  Bullets         │
              └────────┬─────────┘
    ┌────────┬─────────┼─────────┬────────┐
    ▼        ▼         ▼         ▼        ▼
┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐ ┌──────┐
│EN 🇬🇧 │ │FR 🇫🇷 │ │IT 🇮🇹 │ │ES 🇪🇸 │ │NL 🇳🇱 │
└──────┘ └──────┘ └──────┘ └──────┘ └──────┘

SLIDE 7 — Export

Label: STEP 5

Headline: At the end, clean data stands ready to go straight into Akeneo, Shopify, Zalando, Google, or ChatGPT. Anyone on the team can share, review, and download the current state at any time via a link.

Visual:

                 ┌──────────────┐
                 │ Structured   │
                 │ data         │
                 └──────┬───────┘
   ┌───────┬─────┬──────┼──────┬───────┬──────┐
   ▼       ▼     ▼      ▼      ▼       ▼      ▼
 Akeneo Shopify PLM   Excel Zalando  Google ChatGPT

SLIDE 8 — Value

Headline: Product data ready for launch in three minutes instead of days. At the same time, your products become easier to find in ChatGPT and Google AI because the data is complete and the GEO optimization takes effect.

Visual:

    BEFORE                       AFTER

    Days per launch              3 minutes
    Repetitive manual work       Automated
    Invisible in AI search       Recommended by ChatGPT

Subtext: Manual data maintenance is repetitive work that ties up the team anew every week. Faster launches feed directly into revenue.


SLIDE 9 — Team

Headline: Small team, big ambitions. With ultrarelevant.com, we come from the Generative Engine Optimization space.

Visual:

             ┌──────────────────────┐
             │   ultrarelevant.com  │  (screenshot; also a screenshot of saas.ultrarelevant.com)
             └──────────────────────┘

  ┌─────────────────┐       ┌─────────────────┐
  │    ┌──────┐     │       │    ┌──────┐     │
  │    │      │     │       │    │      │     │
  │    │  :)  │     │       │    │  :)  │     │
  │    └──────┘     │       │    └──────┘     │
  │  Elena          │       │  Falco          │
  │  elena@...      │       │  falco@...      │
  └─────────────────┘       └─────────────────┘