Mario Guillen
February 24, 2025 · Mario Guillen

Contextual Search Using AI

[Image: AI-generated search illustration]

Semantic Search with AI using text-embedding-3-small in an API

Introduction

Traditional search methods rely on keyword matching, which often fails to capture the intent and meaning behind a query. Semantic search, powered by AI, enables more accurate and context-aware search results by understanding the relationships between words. In this blog, we will explore how to implement semantic search using OpenAI's text-embedding-3-small model in an API.

Prerequisites

Before we dive in, ensure you have the following:

  • A working API (e.g., a Flask or FastAPI backend)

  • OpenAI API key

  • A database to store embeddings (e.g., PostgreSQL, Elasticsearch, or Pinecone)

  • Python 3.8+

Step 1: Install Dependencies

pip install openai fastapi uvicorn numpy pandas

Step 2: Generate and Store Embeddings

To perform semantic search, we first need to convert text data into vector representations (embeddings). We will use OpenAI's text-embedding-3-small model.

```python
from openai import OpenAI
import numpy as np

# The v1 OpenAI client is required for text-embedding-3-small
# (the legacy openai.Embedding.create interface does not support it).
# The client reads OPENAI_API_KEY from the environment.
client = OpenAI()

def generate_embedding(text: str) -> np.ndarray:
    """Generate an embedding vector for a given text."""
    response = client.embeddings.create(
        input=text,
        model="text-embedding-3-small"
    )
    return np.array(response.data[0].embedding)

# Example usage
text = "Artificial intelligence in search engines"
embedding_vector = generate_embedding(text)
print(embedding_vector)
```

Store these embeddings in a database alongside the original text.
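The post lists PostgreSQL, Elasticsearch, and Pinecone as storage options. As a minimal illustration only, here is a sketch that persists (text, embedding) pairs in SQLite with the vector serialized as JSON; the `documents` table and helper names are my own, not from the post, and a production system would use a dedicated vector store instead.

```python
import json
import sqlite3

import numpy as np

def init_db(path=":memory:"):
    """Create a SQLite connection with a simple documents table."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS documents ("
        "id INTEGER PRIMARY KEY, text TEXT NOT NULL, embedding TEXT NOT NULL)"
    )
    return conn

def save_embedding(conn, text, embedding):
    """Store one text and its vector, serialized as JSON for the TEXT column."""
    conn.execute(
        "INSERT INTO documents (text, embedding) VALUES (?, ?)",
        (text, json.dumps(embedding.tolist())),
    )
    conn.commit()

def load_all(conn):
    """Load rows back into the list-of-dicts shape used by search() below."""
    rows = conn.execute("SELECT text, embedding FROM documents").fetchall()
    return [{"text": t, "embedding": np.array(json.loads(e))} for t, e in rows]
```

Loading rows back into the same `{"text": ..., "embedding": ...}` shape keeps the search function in the next step unchanged regardless of where the vectors live.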

Step 3: Implement Semantic Search

To find relevant results, compute the embedding for a search query and compare it to stored embeddings using cosine similarity.

```python
from numpy.linalg import norm

def cosine_similarity(vec1, vec2):
    return np.dot(vec1, vec2) / (norm(vec1) * norm(vec2))

def search(query: str, stored_data: list):
    """Find the most relevant text based on semantic similarity."""
    query_embedding = generate_embedding(query)
    results = []
    for item in stored_data:
        text, stored_embedding = item['text'], np.array(item['embedding'])
        similarity = cosine_similarity(query_embedding, stored_embedding)
        results.append((text, similarity))
    results.sort(key=lambda x: x[1], reverse=True)  # Sort by relevance
    return results[:5]  # Return top 5 results

# Example stored data
stored_data = [
    {"text": "Machine learning for search optimization",
     "embedding": generate_embedding("Machine learning for search optimization")},
    {"text": "Neural networks in search engines",
     "embedding": generate_embedding("Neural networks in search engines")},
    {"text": "How to rank web pages effectively",
     "embedding": generate_embedding("How to rank web pages effectively")}
]

# Example search query
query = "AI in search engines"
search_results = search(query, stored_data)
print(search_results)
```

Step 4: Create an API for Semantic Search

We will use FastAPI to create a simple API endpoint for semantic search.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SearchRequest(BaseModel):
    # Request body model so the query arrives as JSON,
    # matching the curl example in the testing section
    query: str

@app.post("/search/")
def search_api(request: SearchRequest):
    results = search(request.query, stored_data)
    return {"results": results}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

Testing the API

Run the FastAPI server:

uvicorn main:app --reload

Send a request using cURL or Postman:

curl -X POST "http://localhost:8000/search/" -H "Content-Type: application/json" -d '{"query": "AI for search"}'

Conclusion

By leveraging OpenAI's text-embedding-3-small, we can build an AI-powered semantic search system that provides more relevant and meaningful search results. This approach is useful for applications like knowledge bases, document retrieval, and intelligent search engines. Integrating this into an API allows developers to scale and enhance search capabilities with ease.

© MARIO GUILLEN - 2026

iamguillen@gmail.com