API Overview

As detailed in the documentation, PromptMule offers a comprehensive suite of services to help application developers leverage Generative AI and Large Language Models (LLMs). Here's a summary of the key functionalities:

  1. Prompt Generation API (/prompt): This endpoint allows users to send prompts and receive responses generated by OpenAI models. It supports parameters such as content type, OpenAI key, API key, model selection, messages, token limit, and temperature to customize prompt and response behavior.

  2. Prompt History Retrieval (/prompt/{start-date}/{end-date}/{is-cached}/{limit}/{sortBy}/{sortOrder}): This endpoint enables users to retrieve their prompt history, filtered by start and end dates and by whether the response was served from cache, with a cap on the number of results (limit) and sorting options (sortBy, sortOrder).

  3. API Key Management (/api-keys, /api-keys/{api-key}): These endpoints are used for generating new API keys for users, retrieving all API keys generated by a user, and deleting a specific API key.

  4. Application Subscription and Management (/app/{app-id}/subscribe, /app): These endpoints allow users to subscribe to an application and create new applications.

  5. Usage Statistics (/usage, /usage/daily-stats): These endpoints provide insights into the usage statistics of applications, including daily stats, to help track and manage operational costs.

  6. User Profile Management (/profile/{username}): This endpoint is used for deleting a user from the database, along with any API keys generated by the user.

  7. Issue Reporting (/report-issue): This endpoint allows users to report issues directly to the PromptMule issue tracker on GitHub and Slack.
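As a minimal sketch of the Prompt Generation API described in item 1, the snippet below assembles a /prompt request without sending it. The base URL and the header names (`x-api-key`, `openai-key`) are assumptions for illustration; the overview does not specify them, so consult the full PromptMule API reference for the actual values.

```python
import json

# Hypothetical base URL -- not specified in this overview.
BASE_URL = "https://api.example.com"

def build_prompt_request(api_key, openai_key, model, messages,
                         max_tokens=256, temperature=0.7):
    """Assemble the URL, headers, and JSON body of a POST /prompt call."""
    headers = {
        "Content-Type": "application/json",  # content type parameter
        "x-api-key": api_key,                # hypothetical header name
        "openai-key": openai_key,            # hypothetical header name
    }
    body = {
        "model": model,            # model selection
        "messages": messages,      # chat-style message list
        "max_tokens": max_tokens,  # token limit
        "temperature": temperature,
    }
    return f"{BASE_URL}/prompt", headers, json.dumps(body)

url, headers, payload = build_prompt_request(
    api_key="pm-key",
    openai_key="sk-...",
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(url)
```

Separating request construction from sending keeps the sketch runnable offline; in a real client the tuple would be passed to an HTTP library of your choice.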
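The history endpoint in item 2 encodes all of its filters in the path rather than in query parameters. A small helper makes the template easier to fill in correctly; the value formats shown (ISO dates, lowercase booleans) are assumptions, since the overview does not spell them out.

```python
def build_history_path(start_date, end_date, is_cached, limit,
                       sort_by, sort_order):
    """Fill in the /prompt/{start-date}/{end-date}/{is-cached}/{limit}/
    {sortBy}/{sortOrder} path template from item 2 of the overview."""
    return (f"/prompt/{start_date}/{end_date}/"
            f"{str(is_cached).lower()}/{limit}/{sort_by}/{sort_order}")

path = build_history_path("2024-01-01", "2024-01-31", True, 50,
                          "date", "desc")
print(path)  # /prompt/2024-01-01/2024-01-31/true/50/date/desc
```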

Each of these endpoints is designed to simplify the integration and use of Generative AI and LLMs in application development, with a focus on efficiency, customization, and developer-friendly operation.