C++ openai — 01

Date: 2023-04-01
Last modified: 2023-09-17

The ChatGPT API is a natural language processing API based on the GPT-3.5 architecture. It allows you to integrate powerful text-based AI capabilities into your applications, including language translation, sentiment analysis, question answering, text completion, and more.

Using the ChatGPT API, you can easily add advanced language processing features to your applications without having to develop them from scratch. The API accepts text-based inputs and returns relevant, human-like responses generated by the GPT-3.5 model, which has been trained on a massive corpus of human-written text.

To use the ChatGPT API, you send an HTTP request to the API endpoint with your input text as the payload, and the API returns a response in the form of text. You can integrate this API into your applications to provide your users with natural language interactions and responses, improving the overall user experience.
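The response returned by the completions endpoint is JSON. A trimmed, illustrative sketch of its shape (all values here are placeholders, not real API output) looks like this; the generated text lives in choices[0].text, which is exactly the field the code below extracts:

```json
{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "text-davinci-003",
  "choices": [
    { "text": "...", "index": 0, "logprobs": null, "finish_reason": "length" }
  ],
  "usage": { "prompt_tokens": 9, "completion_tokens": 100, "total_tokens": 109 }
}
```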


The ChatGPT API can be used for a wide range of natural language processing (NLP) tasks, such as:

- Language translation
- Sentiment analysis
- Question answering
- Text completion

These are just a few examples of the main use cases for the API. With its powerful NLP capabilities, it can be used in a wide range of applications to improve the user experience and provide more natural language interactions.


// httplib only provides SSLClient when OpenSSL support is enabled.
#define CPPHTTPLIB_OPENSSL_SUPPORT
#include "httplib.h"
#include "json.hpp"
#include "toml.hpp"
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

using namespace nlohmann;
using namespace httplib;
using namespace std;

string generate_text( string api_key, string prompt ) {
  httplib::SSLClient cli( "api.openai.com" );

  auto    endpoint            = "/v1/completions"s;
  auto    auth_header         = "Bearer "s + api_key;
  auto    content_type_header = "application/json"s;
  Headers headers             = { { "Authorization", auth_header }, { "Content-Type", content_type_header } };

  json data
      = { { "model", "text-davinci-003" },
          // The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array
          // of token arrays.
          { "prompt", prompt },
          // The maximum number of tokens to generate in the completion.
          // The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a
          // context length of 2048 tokens (except for the newest models, which support 4096).
          { "max_tokens", 100 },
          // How many completions to generate for each prompt.
          { "n", 1 },
          // Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the
          // stop sequence.
          { "stop", nullptr },
          // What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random,
          // while lower values like 0.2 will make it more focused and deterministic.
          // We generally recommend altering this or top_p but not both.
          { "temperature", 0.5 } };

  auto res = cli.Post( endpoint, headers, data.dump(), content_type_header );

  if( !res || res->status != 200 ) {
    cerr << "Error: " << ( res ? res->status : -1 ) << endl;
    return "";
  }

  json response_json = json::parse( res->body );

  { // Debugging
    ofstream dbg( "output/openai_01.json" );
    dbg << response_json.dump( 2 );
  }

  string response_text = response_json["choices"][0]["text"];
  return response_text;
}

int main() {
  auto config  = toml::parse_file( "openai_01.toml" );
  auto api_key = config["openai"]["api_key"].value_or( "YOUR_API_KEY"s );
  // I don't need to specify the language:
  // if I ask in Portuguese, the answer will be in Portuguese as well.
  auto prompt   = "Qual é o significado da vida?"s;
  auto response = generate_text( api_key, prompt );

  cout << prompt << endl;
  cout << response << endl;
  return 0;
}
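main() reads the API key from openai_01.toml via toml::parse_file. A minimal config file matching the keys the code looks up (the key value is of course a placeholder) would be:

```toml
# openai_01.toml, consumed by toml::parse_file in main()
[openai]
api_key = "YOUR_API_KEY"
```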

Possible output

Qual é o significado da vida?
