Obiguard provides a robust, secure gateway for integrating Large Language Models (LLMs), including the Perplexity AI APIs, into your applications.
With Obiguard, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, while keeping your LLM API keys secure through a virtual key system.
You can limit citations to specific domains using the search_domain_filter parameter.
This feature is currently in closed beta and limited to 3 domains for whitelisting or blacklisting.
```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me about electric cars"}],
    model="pplx-70b-chat",
    search_domain_filter=["tesla.com", "ford.com", "-competitors.com"]  # Use '-' prefix for blacklisting
)
```
Enable image results in responses from online models using the return_images parameter:
```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Show me pictures of electric cars"}],
    model="pplx-70b-chat",
    return_images=True  # Feature in closed beta
)
```
Get related questions in the response using the return_related_questions parameter:
```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me about electric cars"}],
    model="pplx-70b-chat",
    return_related_questions=True  # Feature in closed beta
)
```
The search_context_size option, passed via the web_search_options parameter, determines how much search context is retrieved for the model. Options are:
- low: minimizes context for cost savings, but answers are less comprehensive.
- medium: balanced approach suitable for most queries.
- high: maximizes context for comprehensive answers, at higher cost.
```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "What are the latest developments in electric cars?"}],
    model="sonar",
    web_search_options={"search_context_size": "high"}
)
```
The search_recency_filter parameter filters search results by time (e.g., "week", "day").
```python
completion = client.chat.completions.create(
    messages=[{"role": "user", "content": "What are the latest developments in electric cars?"}],
    model="sonar",
    search_recency_filter="week",  # e.g. "week" or "day"
)
```