Using the Prompt Privacy API for Fun and Profit

Brandon Corbin

In this tutorial, we’re going to build a multi-model chain-of-thought blog post generator using the Prompt Privacy API (and JavaScript).

By leveraging a chain-of-thought approach and integrating multiple models, we’ll be able to create richer, more varied, and more engaging blog content.

Set Your API Key and Tenant ID

You can obtain your API Key and Tenant ID from the Console.

const apiKey = "9c49d15136114e469•••••••••••••";
const tenantId = "58f2d7b9-4d2f-48f0-bc37-•••••••••••••";
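
For anything beyond a quick experiment, you’ll probably want to keep these values out of your source code. One option (a minimal sketch, assuming Node.js and environment variable names of your choosing) is to read them from the environment instead:

const apiKey = process.env.PROMPT_PRIVACY_API_KEY;      // hypothetical variable name
const tenantId = process.env.PROMPT_PRIVACY_TENANT_ID;  // hypothetical variable name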

Selecting the Optimal Models

For blog posts, I have found that Gemini’s chain-of-thought methodology pairs nicely with Claude 3 Sonnet’s writing style; the combination produces very natural and approachable content.

const MODELS = {
  claude3sonnet: "16843bec-88f8-4db5-b785-a2982eab4abf",
  gemini: "1115165e-f439-4c56-8aa8-7b1dc4a6e712"
};

Reusable Call Function

Let’s create a reusable call function so we can easily access the various models. The messages argument is an array of messages following the OpenAI message format: { role: "system|user|assistant", content: "message-content" }

const callAPI = async (messages, model) => {
  // POST the messages to the Prompt Privacy API gateway
  const call = await fetch("https://apigateway.promptprivacy.com/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": apiKey,
    },
    body: JSON.stringify({
      tenant_id: tenantId,
      model: model,
      prompts: messages,
    }),
  });
  const res = await call.json();
  // Surface API-level failures as exceptions
  if (res.error) throw new Error(res.error);
  if (res.success === false) throw new Error(res.message);
  // Return the first choice from the model's response
  return res.response.choices[0];
};
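
Before wiring up the full chain, you can sanity-check the helper with a throwaway call wrapped in try/catch (the test prompt below is just a placeholder):

// Quick smoke test of the callAPI helper
try {
  const test = await callAPI([
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Reply with a single word: ready." }
  ], MODELS.claude3sonnet);
  console.log(test.message.content);
} catch (err) {
  console.error("API call failed:", err.message);
}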

Define Your Blog Post Idea

Describe your blog post idea with enough detail to lead the LLM in a specific direction. In this example, we request specific types of “prompting techniques” by including a “for example:” list.

const blogIdea = "Using different prompting techniques to get what you want from your Large Language Models. For example: Contextual Prompting, Interactive, Chain of Thought, Self-consistency, Few-shot Prompting and others. Include examples for using markdown code ```text format."

Chain of Thought

We’ll now ask the LLM to outline what a blog post on our chosen idea should contain. This allows the LLM to focus on the structure rather than the details, which we’ll address in the next steps. For this call, I found that Gemini performs exceptionally well, though your results may vary.

// Ask for a Chain of Thought
const chainOfThoughtCall = await callAPI([
  {
    role: "system",
    content: `You are a blog post chain of thought generator. You will be given an idea and you will generate a chain of thought.`
  },
  {
    role: "user",
    content: `Blog Idea: ${blogIdea}.`
  }
], MODELS.gemini);

// Get the Chain of Thought Text
const chainOfThought = chainOfThoughtCall.message.content;
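
It can be worth logging this intermediate outline so you can see exactly what the second model will be working from:

// Optional: inspect the outline before composing the post
console.log(chainOfThought);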

Composing the Blog Post Based on the Chain of Thought

Now that we have a chain of thought, we can leverage it to compose our blog post.

// Compose the Blog Post with the Chain of Thought
const blogMarkdown = await callAPI([
  {
    role: "system",
    content: `You are a blog post generator. You will compose a blog post about ${blogIdea}. Use markdown for formatting.`
  },
  {
    role: "user",
    content: `Write a blog post, using a fun but professional tone, based on this chain of thought: ${chainOfThought}`
  }
], MODELS.claude3sonnet);

console.log(blogMarkdown);
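
If you’d rather save the post than just print it, here’s a minimal sketch assuming Node.js and its built-in fs module (the filename is arbitrary):

import { writeFile } from "node:fs/promises";

// choices[0].message.content holds the generated markdown text
await writeFile("blog-post.md", blogMarkdown.message.content);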

And there you have it! A multi-model chain-of-thought blog post generator using the Prompt Privacy API. This method not only streamlines your content creation process but also enhances the quality and depth of your blog posts. Happy blogging!

The Full Script

const apiKey = "9c49d15136114e469•••••••••••••";
const tenantId = "58f2d7b9-4d2f-48f0-bc37-•••••••••••••";

const MODELS = {
  claude3sonnet: "16843bec-88f8-4db5-b785-a2982eab4abf",
  gemini: "1115165e-f439-4c56-8aa8-7b1dc4a6e712",
};

const callAPI = async (messages, model) => {
  // POST the messages to the Prompt Privacy API gateway
  const call = await fetch("https://apigateway.promptprivacy.com/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": apiKey,
    },
    body: JSON.stringify({
      tenant_id: tenantId,
      model: model,
      prompts: messages,
    }),
  });
  const res = await call.json();
  // Surface API-level failures as exceptions
  if (res.error) throw new Error(res.error);
  if (res.success === false) throw new Error(res.message);
  // Return the first choice from the model's response
  return res.response.choices[0];
};

const blogIdea = "Using different prompting techniques to get what you want from your Large Language Models. For example: Contextual Prompting, Interactive, Chain of Thought, Self-consistency, Few-shot Prompting and others. Include examples for using markdown code ```text format.";

// Ask for a Chain of Thought
const chainOfThoughtCall = await callAPI([
  {
    role: "system",
    content: `You are a blog post chain of thought generator. You will be given an idea and you will generate a chain of thought.`
  },
  {
    role: "user",
    content: `Blog Idea: ${blogIdea}.`
  }
], MODELS.gemini);

const chainOfThought = chainOfThoughtCall.message.content;

// Compose the Blog Post with the Chain of Thought
const blogMarkdown = await callAPI([
  {
    role: "system",
    content: `You are a blog post generator. You will compose a blog post about ${blogIdea}. Use markdown for formatting.`
  },
  {
    role: "user",
    content: `Write a blog post, using a fun but professional tone, based on this chain of thought: ${chainOfThought}`
  }
], MODELS.claude3sonnet);

console.log(blogMarkdown);
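
If you’re running this with Node.js, note that the script uses top-level await and the global fetch API, so it needs a recent release (v18 or later) and must run as an ES module; for example, save it with an .mjs extension or set "type": "module" in package.json.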

You will now have a freshly produced blog post that skillfully leverages two different LLMs to produce more nuanced and unique content. By combining the strengths of these two language models, we enhance the depth and originality of the output. You can see the exact post generated from this code here.

Ready to help your enterprise begin the transformation to the next revolution? Talk to one of our Solution Directors.
