chore: update docs

pull/422/head
Travis Fischer 2023-03-01 23:32:33 -06:00
parent 1e4ddd6b84
commit 5c49e20e3e
14 changed files with 831 additions and 222 deletions

View file

@@ -22,8 +22,7 @@
**new ChatGPTAPI**(`opts`)
Creates a new client wrapper around OpenAI's completion API using the
unofficial ChatGPT model.
Creates a new client wrapper around OpenAI's chat completion API, mimicking the official ChatGPT webapp's functionality as closely as possible.
#### Parameters
@@ -32,21 +31,19 @@ unofficial ChatGPT model.
| `opts` | `Object` | - |
| `opts.apiBaseUrl?` | `string` | **`Default Value`** `'https://api.openai.com'` * |
| `opts.apiKey` | `string` | - |
| `opts.apiReverseProxyUrl?` | `string` | **`Default Value`** `undefined` * |
| `opts.assistantLabel?` | `string` | **`Default Value`** `'ChatGPT'` * |
| `opts.completionParams?` | `Partial`<[`CompletionParams`](../modules/openai.md#completionparams)\> | - |
| `opts.completionParams?` | `Partial`<`Omit`<[`CreateChatCompletionRequest`](../interfaces/openai.CreateChatCompletionRequest.md), ``"messages"`` \| ``"n"``\>\> | - |
| `opts.debug?` | `boolean` | **`Default Value`** `false` * |
| `opts.fetch?` | (`input`: `RequestInfo` \| `URL`, `init?`: `RequestInit`) => `Promise`<`Response`\> | - |
| `opts.getMessageById?` | [`GetMessageByIdFunction`](../modules.md#getmessagebyidfunction) | - |
| `opts.maxModelTokens?` | `number` | **`Default Value`** `4096` * |
| `opts.maxResponseTokens?` | `number` | **`Default Value`** `1000` * |
| `opts.messageStore?` | `Keyv`<`any`, `Record`<`string`, `unknown`\>\> | - |
| `opts.systemMessage?` | `string` | - |
| `opts.upsertMessage?` | [`UpsertMessageFunction`](../modules.md#upsertmessagefunction) | - |
| `opts.userLabel?` | `string` | **`Default Value`** `'User'` * |
#### Defined in
[src/chatgpt-api.ts:53](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-api.ts#L53)
[src/chatgpt-api.ts:49](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-api.ts#L49)
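Taken together, the defaults in the table above determine the client's token budget. A sketch of how they resolve, using local stand-in types rather than the package's real exports (the `withDefaults` helper and `promptTokenBudget` are illustrative assumptions):

```typescript
// Local sketch of the constructor options documented above -- an
// illustration, not the library's actual exported types.
interface ChatGPTAPIOptions {
  apiKey: string
  apiBaseUrl?: string
  maxModelTokens?: number
  maxResponseTokens?: number
  debug?: boolean
}

// Fill in the documented defaults for any option the caller omits.
function withDefaults(opts: ChatGPTAPIOptions): Required<ChatGPTAPIOptions> {
  return {
    apiBaseUrl: 'https://api.openai.com',
    maxModelTokens: 4096,
    maxResponseTokens: 1000,
    debug: false,
    ...opts
  }
}

const resolved = withDefaults({ apiKey: 'sk-hypothetical-key' })

// With the defaults, roughly 4096 - 1000 tokens remain for the prompt.
const promptTokenBudget = resolved.maxModelTokens - resolved.maxResponseTokens
```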
## Accessors
@@ -60,7 +57,7 @@ unofficial ChatGPT model.
#### Defined in
[src/chatgpt-api.ts:353](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-api.ts#L353)
[src/chatgpt-api.ts:336](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-api.ts#L336)
`set` **apiKey**(`apiKey`): `void`
@@ -76,7 +73,7 @@ unofficial ChatGPT model.
#### Defined in
[src/chatgpt-api.ts:357](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-api.ts#L357)
[src/chatgpt-api.ts:340](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-api.ts#L340)
## Methods
@@ -84,17 +81,14 @@ unofficial ChatGPT model.
**sendMessage**(`text`, `opts?`): `Promise`<[`ChatMessage`](../interfaces/ChatMessage.md)\>
Sends a message to ChatGPT, waits for the response to resolve, and returns
the response.
Sends a message to the OpenAI chat completions endpoint, waits for the response
to resolve, and returns the response.
If you want your response to have historical context, you must provide a valid `parentMessageId`.
If you want to receive a stream of partial responses, use `opts.onProgress`.
If you want to receive the full response, including message and conversation IDs,
you can use `opts.onConversationResponse` or use the `ChatGPTAPI.getConversation`
helper.
Set `debug: true` in the `ChatGPTAPI` constructor to log more info on the full prompt sent to the OpenAI completions API. You can override the `promptPrefix` and `promptSuffix` in `opts` to customize the prompt.
Set `debug: true` in the `ChatGPTAPI` constructor to log more info on the full prompt sent to the OpenAI chat completions API. You can override the `systemMessage` in `opts` to customize the assistant's instructions.
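The threading behavior described above (pass the previous response's id as `parentMessageId`) can be sketched with a synchronous stub in place of the real network call; `sendMessageStub`, its echo behavior, and the pared-down `ChatMessage` shape are illustrative assumptions, not the library's actual implementation:

```typescript
// Pared-down message shape, mirroring the documented ChatMessage fields.
interface ChatMessage {
  id: string
  text: string
  role: 'user' | 'assistant' | 'system'
  parentMessageId?: string
}

// Stub standing in for ChatGPTAPI.sendMessage: echoes the text and
// records which parent message it was attached to.
let nextId = 0
function sendMessageStub(
  text: string,
  opts: { parentMessageId?: string; onProgress?: (m: ChatMessage) => void } = {}
): ChatMessage {
  const response: ChatMessage = {
    id: `msg-${++nextId}`,
    text: `echo: ${text}`,
    role: 'assistant',
    parentMessageId: opts.parentMessageId
  }
  // With the real client, onProgress would fire repeatedly with partial text.
  opts.onProgress?.(response)
  return response
}

// Thread a follow-up onto the first turn by passing parentMessageId.
const first = sendMessageStub('Hello')
const followUp = sendMessageStub('And then?', { parentMessageId: first.id })
```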
#### Parameters
@@ -111,4 +105,4 @@ The response from ChatGPT
#### Defined in
[src/chatgpt-api.ts:185](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-api.ts#L185)
[src/chatgpt-api.ts:157](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-api.ts#L157)

View file

@@ -20,7 +20,6 @@
- [isFinal](ChatGPTError.md#isfinal)
- [statusCode](ChatGPTError.md#statuscode)
- [statusText](ChatGPTError.md#statustext)
- [type](ChatGPTError.md#type)
## Constructors
@@ -67,7 +66,7 @@ node_modules/.pnpm/typescript@4.9.5/node_modules/typescript/lib/lib.es2022.error
#### Defined in
[src/types.ts:51](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L51)
[src/types.ts:47](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L47)
___
@@ -77,7 +76,7 @@ ___
#### Defined in
[src/types.ts:50](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L50)
[src/types.ts:46](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L46)
___
@@ -87,7 +86,7 @@ ___
#### Defined in
[src/types.ts:48](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L48)
[src/types.ts:44](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L44)
___
@@ -97,14 +96,4 @@ ___
#### Defined in
[src/types.ts:49](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L49)
___
### type
`Optional` **type**: [`ChatGPTErrorType`](../modules.md#chatgpterrortype)
#### Defined in
[src/types.ts:52](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L52)
[src/types.ts:45](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L45)

View file

@@ -36,7 +36,7 @@
#### Defined in
[src/chatgpt-unofficial-proxy-api.ts:19](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-unofficial-proxy-api.ts#L19)
[src/chatgpt-unofficial-proxy-api.ts:20](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-unofficial-proxy-api.ts#L20)
## Accessors
@@ -50,7 +50,7 @@
#### Defined in
[src/chatgpt-unofficial-proxy-api.ts:65](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-unofficial-proxy-api.ts#L65)
[src/chatgpt-unofficial-proxy-api.ts:66](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-unofficial-proxy-api.ts#L66)
`set` **accessToken**(`value`): `void`
@@ -66,7 +66,7 @@
#### Defined in
[src/chatgpt-unofficial-proxy-api.ts:69](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-unofficial-proxy-api.ts#L69)
[src/chatgpt-unofficial-proxy-api.ts:70](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-unofficial-proxy-api.ts#L70)
## Methods
@@ -101,4 +101,4 @@ The response from ChatGPT
#### Defined in
[src/chatgpt-unofficial-proxy-api.ts:96](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/chatgpt-unofficial-proxy-api.ts#L96)
[src/chatgpt-unofficial-proxy-api.ts:97](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/chatgpt-unofficial-proxy-api.ts#L97)

View file

@@ -7,8 +7,10 @@
### Properties
- [conversationId](ChatMessage.md#conversationid)
- [delta](ChatMessage.md#delta)
- [detail](ChatMessage.md#detail)
- [id](ChatMessage.md#id)
- [name](ChatMessage.md#name)
- [parentMessageId](ChatMessage.md#parentmessageid)
- [role](ChatMessage.md#role)
- [text](ChatMessage.md#text)
@@ -21,7 +23,17 @@
#### Defined in
[src/types.ts:34](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L34)
[src/types.ts:40](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L40)
___
### delta
`Optional` **delta**: `string`
#### Defined in
[src/types.ts:34](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L34)
___
@@ -31,7 +43,7 @@ ___
#### Defined in
[src/types.ts:35](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L35)
[src/types.ts:35](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L35)
___
@@ -41,7 +53,17 @@ ___
#### Defined in
[src/types.ts:30](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L30)
[src/types.ts:30](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L30)
___
### name
`Optional` **name**: `string`
#### Defined in
[src/types.ts:33](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L33)
___
@@ -51,7 +73,7 @@ ___
#### Defined in
[src/types.ts:33](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L33)
[src/types.ts:38](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L38)
___
@@ -61,7 +83,7 @@ ___
#### Defined in
[src/types.ts:32](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L32)
[src/types.ts:32](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L32)
___
@@ -71,4 +93,4 @@ ___
#### Defined in
[src/types.ts:31](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L31)
[src/types.ts:31](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L31)

View file

@@ -0,0 +1,67 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / ChatCompletionRequestMessage
# Interface: ChatCompletionRequestMessage
[openai](../modules/openai.md).ChatCompletionRequestMessage
**`Export`**
**`Interface`**
ChatCompletionRequestMessage
## Table of contents
### Properties
- [content](openai.ChatCompletionRequestMessage.md#content)
- [name](openai.ChatCompletionRequestMessage.md#name)
- [role](openai.ChatCompletionRequestMessage.md#role)
## Properties
### content
**content**: `string`
The contents of the message
**`Memberof`**
ChatCompletionRequestMessage
#### Defined in
[src/types.ts:178](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L178)
___
### name
`Optional` **name**: `string`
The name of the user in a multi-user chat
**`Memberof`**
ChatCompletionRequestMessage
#### Defined in
[src/types.ts:184](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L184)
___
### role
**role**: [`ChatCompletionRequestMessageRoleEnum`](../modules/openai.md#chatcompletionrequestmessageroleenum-1)
The role of the author of this message.
**`Memberof`**
ChatCompletionRequestMessage
#### Defined in
[src/types.ts:172](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L172)

View file

@@ -0,0 +1,50 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / ChatCompletionResponseMessage
# Interface: ChatCompletionResponseMessage
[openai](../modules/openai.md).ChatCompletionResponseMessage
**`Export`**
**`Interface`**
ChatCompletionResponseMessage
## Table of contents
### Properties
- [content](openai.ChatCompletionResponseMessage.md#content)
- [role](openai.ChatCompletionResponseMessage.md#role)
## Properties
### content
**content**: `string`
The contents of the message
**`Memberof`**
ChatCompletionResponseMessage
#### Defined in
[src/types.ts:210](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L210)
___
### role
**role**: [`ChatCompletionResponseMessageRoleEnum`](../modules/openai.md#chatcompletionresponsemessageroleenum-1)
The role of the author of this message.
**`Memberof`**
ChatCompletionResponseMessage
#### Defined in
[src/types.ts:204](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L204)

View file

@@ -0,0 +1,65 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / CreateChatCompletionDeltaResponse
# Interface: CreateChatCompletionDeltaResponse
[openai](../modules/openai.md).CreateChatCompletionDeltaResponse
## Table of contents
### Properties
- [choices](openai.CreateChatCompletionDeltaResponse.md#choices)
- [created](openai.CreateChatCompletionDeltaResponse.md#created)
- [id](openai.CreateChatCompletionDeltaResponse.md#id)
- [model](openai.CreateChatCompletionDeltaResponse.md#model)
- [object](openai.CreateChatCompletionDeltaResponse.md#object)
## Properties
### choices
**choices**: [{ `delta`: { `content?`: `string` ; `role`: [`Role`](../modules.md#role) } ; `finish_reason`: `string` ; `index`: `number` }]
#### Defined in
[src/types.ts:149](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L149)
___
### created
**created**: `number`
#### Defined in
[src/types.ts:147](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L147)
___
### id
**id**: `string`
#### Defined in
[src/types.ts:145](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L145)
___
### model
**model**: `string`
#### Defined in
[src/types.ts:148](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L148)
___
### object
**object**: ``"chat.completion.chunk"``
#### Defined in
[src/types.ts:146](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L146)
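A stream of these chunks is typically reassembled by concatenating each `delta.content`. A minimal sketch, using a local mirror of the interface above (`finish_reason` widened to allow `null` for in-flight chunks, which is an assumption):

```typescript
// Local mirror of one streamed chunk's shape (illustrative only).
interface CreateChatCompletionDeltaResponse {
  id: string
  object: 'chat.completion.chunk'
  created: number
  model: string
  choices: [{
    delta: { role?: string; content?: string }
    index: number
    finish_reason: string | null
  }]
}

// Reassemble the assistant's text by concatenating each delta.content.
function accumulateDeltas(chunks: CreateChatCompletionDeltaResponse[]): string {
  return chunks.map((c) => c.choices[0].delta.content ?? '').join('')
}

const sample: CreateChatCompletionDeltaResponse[] = [
  {
    id: 'chunk-1', object: 'chat.completion.chunk', created: 0, model: 'gpt-3.5-turbo',
    choices: [{ delta: { role: 'assistant', content: 'Hel' }, index: 0, finish_reason: null }]
  },
  {
    id: 'chunk-1', object: 'chat.completion.chunk', created: 0, model: 'gpt-3.5-turbo',
    choices: [{ delta: { content: 'lo' }, index: 0, finish_reason: 'stop' }]
  }
]
```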

View file

@@ -0,0 +1,218 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / CreateChatCompletionRequest
# Interface: CreateChatCompletionRequest
[openai](../modules/openai.md).CreateChatCompletionRequest
**`Export`**
**`Interface`**
CreateChatCompletionRequest
## Table of contents
### Properties
- [frequency\_penalty](openai.CreateChatCompletionRequest.md#frequency_penalty)
- [logit\_bias](openai.CreateChatCompletionRequest.md#logit_bias)
- [max\_tokens](openai.CreateChatCompletionRequest.md#max_tokens)
- [messages](openai.CreateChatCompletionRequest.md#messages)
- [model](openai.CreateChatCompletionRequest.md#model)
- [n](openai.CreateChatCompletionRequest.md#n)
- [presence\_penalty](openai.CreateChatCompletionRequest.md#presence_penalty)
- [stop](openai.CreateChatCompletionRequest.md#stop)
- [stream](openai.CreateChatCompletionRequest.md#stream)
- [temperature](openai.CreateChatCompletionRequest.md#temperature)
- [top\_p](openai.CreateChatCompletionRequest.md#top_p)
- [user](openai.CreateChatCompletionRequest.md#user)
## Properties
### frequency\_penalty
`Optional` **frequency\_penalty**: `number`
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:284](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L284)
___
### logit\_bias
`Optional` **logit\_bias**: `object`
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:290](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L290)
___
### max\_tokens
`Optional` **max\_tokens**: `number`
The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:272](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L272)
___
### messages
**messages**: [`ChatCompletionRequestMessage`](openai.ChatCompletionRequestMessage.md)[]
The messages to generate chat completions for, in the [chat format](/docs/guides/chat/introduction).
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:236](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L236)
___
### model
**model**: `string`
ID of the model to use. Currently, only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported.
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:230](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L230)
___
### n
`Optional` **n**: `number`
How many chat completion choices to generate for each input message.
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:254](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L254)
___
### presence\_penalty
`Optional` **presence\_penalty**: `number`
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:278](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L278)
___
### stop
`Optional` **stop**: [`CreateChatCompletionRequestStop`](../modules/openai.md#createchatcompletionrequeststop)
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:266](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L266)
___
### stream
`Optional` **stream**: `boolean`
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message.
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:260](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L260)
___
### temperature
`Optional` **temperature**: `number`
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both.
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:242](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L242)
___
### top\_p
`Optional` **top\_p**: `number`
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both.
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:248](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L248)
___
### user
`Optional` **user**: `string`
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
**`Memberof`**
CreateChatCompletionRequest
#### Defined in
[src/types.ts:296](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L296)
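Putting the fields above together, a request body might look like the following sketch (local stand-in interfaces, not the package's exports; the example values are illustrative):

```typescript
// Local mirrors of the documented request types (illustrative only).
interface ChatCompletionRequestMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
  name?: string
}

interface CreateChatCompletionRequest {
  model: string
  messages: ChatCompletionRequestMessage[]
  temperature?: number
  top_p?: number
  n?: number
  stream?: boolean
  stop?: string | string[]
  max_tokens?: number
  presence_penalty?: number
  frequency_penalty?: number
  logit_bias?: Record<string, number>
  user?: string
}

const request: CreateChatCompletionRequest = {
  model: 'gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize the plot of Hamlet in one sentence.' }
  ],
  temperature: 0.7, // tune this or top_p, not both
  max_tokens: 256
}
```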

View file

@@ -0,0 +1,106 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / CreateChatCompletionResponse
# Interface: CreateChatCompletionResponse
[openai](../modules/openai.md).CreateChatCompletionResponse
**`Export`**
**`Interface`**
CreateChatCompletionResponse
## Table of contents
### Properties
- [choices](openai.CreateChatCompletionResponse.md#choices)
- [created](openai.CreateChatCompletionResponse.md#created)
- [id](openai.CreateChatCompletionResponse.md#id)
- [model](openai.CreateChatCompletionResponse.md#model)
- [object](openai.CreateChatCompletionResponse.md#object)
- [usage](openai.CreateChatCompletionResponse.md#usage)
## Properties
### choices
**choices**: [`CreateChatCompletionResponseChoicesInner`](openai.CreateChatCompletionResponseChoicesInner.md)[]
**`Memberof`**
CreateChatCompletionResponse
#### Defined in
[src/types.ts:339](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L339)
___
### created
**created**: `number`
**`Memberof`**
CreateChatCompletionResponse
#### Defined in
[src/types.ts:327](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L327)
___
### id
**id**: `string`
**`Memberof`**
CreateChatCompletionResponse
#### Defined in
[src/types.ts:315](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L315)
___
### model
**model**: `string`
**`Memberof`**
CreateChatCompletionResponse
#### Defined in
[src/types.ts:333](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L333)
___
### object
**object**: `string`
**`Memberof`**
CreateChatCompletionResponse
#### Defined in
[src/types.ts:321](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L321)
___
### usage
`Optional` **usage**: [`CreateCompletionResponseUsage`](openai.CreateCompletionResponseUsage.md)
**`Memberof`**
CreateChatCompletionResponse
#### Defined in
[src/types.ts:345](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L345)
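A sketch of reading a response of this shape, using local stand-in types; since every field on a choice is optional, the accessor guards each step (`firstMessageText` and the sample values are illustrative assumptions):

```typescript
// Local mirrors of the documented response types (illustrative only).
interface ChatCompletionResponseMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

interface CreateChatCompletionResponseChoicesInner {
  index?: number
  message?: ChatCompletionResponseMessage
  finish_reason?: string
}

interface CreateChatCompletionResponse {
  id: string
  object: string
  created: number
  model: string
  choices: CreateChatCompletionResponseChoicesInner[]
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number }
}

// Pull the assistant's text out of the first choice, if present.
function firstMessageText(res: CreateChatCompletionResponse): string | undefined {
  return res.choices[0]?.message?.content
}

const sampleResponse: CreateChatCompletionResponse = {
  id: 'chatcmpl-123',
  object: 'chat.completion',
  created: 1677649420,
  model: 'gpt-3.5-turbo',
  choices: [
    { index: 0, message: { role: 'assistant', content: 'Hello there!' }, finish_reason: 'stop' }
  ],
  usage: { prompt_tokens: 9, completion_tokens: 3, total_tokens: 12 }
}
```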

View file

@@ -0,0 +1,61 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / CreateChatCompletionResponseChoicesInner
# Interface: CreateChatCompletionResponseChoicesInner
[openai](../modules/openai.md).CreateChatCompletionResponseChoicesInner
**`Export`**
**`Interface`**
CreateChatCompletionResponseChoicesInner
## Table of contents
### Properties
- [finish\_reason](openai.CreateChatCompletionResponseChoicesInner.md#finish_reason)
- [index](openai.CreateChatCompletionResponseChoicesInner.md#index)
- [message](openai.CreateChatCompletionResponseChoicesInner.md#message)
## Properties
### finish\_reason
`Optional` **finish\_reason**: `string`
**`Memberof`**
CreateChatCompletionResponseChoicesInner
#### Defined in
[src/types.ts:370](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L370)
___
### index
`Optional` **index**: `number`
**`Memberof`**
CreateChatCompletionResponseChoicesInner
#### Defined in
[src/types.ts:358](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L358)
___
### message
`Optional` **message**: [`ChatCompletionResponseMessage`](openai.ChatCompletionResponseMessage.md)
**`Memberof`**
CreateChatCompletionResponseChoicesInner
#### Defined in
[src/types.ts:364](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L364)

View file

@@ -0,0 +1,61 @@
[chatgpt](../readme.md) / [Exports](../modules.md) / [openai](../modules/openai.md) / CreateCompletionResponseUsage
# Interface: CreateCompletionResponseUsage
[openai](../modules/openai.md).CreateCompletionResponseUsage
**`Export`**
**`Interface`**
CreateCompletionResponseUsage
## Table of contents
### Properties
- [completion\_tokens](openai.CreateCompletionResponseUsage.md#completion_tokens)
- [prompt\_tokens](openai.CreateCompletionResponseUsage.md#prompt_tokens)
- [total\_tokens](openai.CreateCompletionResponseUsage.md#total_tokens)
## Properties
### completion\_tokens
**completion\_tokens**: `number`
**`Memberof`**
CreateCompletionResponseUsage
#### Defined in
[src/types.ts:389](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L389)
___
### prompt\_tokens
**prompt\_tokens**: `number`
**`Memberof`**
CreateCompletionResponseUsage
#### Defined in
[src/types.ts:383](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L383)
___
### total\_tokens
**total\_tokens**: `number`
**`Memberof`**
CreateCompletionResponseUsage
#### Defined in
[src/types.ts:395](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L395)
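The three counters normally relate as prompt + completion = total. A small sanity-check sketch with a local stand-in type (the helper and sample numbers are illustrative assumptions):

```typescript
// Local mirror of the documented usage type (illustrative only).
interface CreateCompletionResponseUsage {
  prompt_tokens: number
  completion_tokens: number
  total_tokens: number
}

// Returns true when the reported total matches prompt + completion.
function usageIsConsistent(u: CreateCompletionResponseUsage): boolean {
  return u.prompt_tokens + u.completion_tokens === u.total_tokens
}

const usage: CreateCompletionResponseUsage = {
  prompt_tokens: 12,
  completion_tokens: 30,
  total_tokens: 42
}
```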

View file

@@ -20,12 +20,10 @@
### Type Aliases
- [ChatGPTErrorType](modules.md#chatgpterrortype)
- [ContentType](modules.md#contenttype)
- [ConversationJSONBody](modules.md#conversationjsonbody)
- [ConversationResponseEvent](modules.md#conversationresponseevent)
- [FetchFn](modules.md#fetchfn)
- [GetAccessTokenFn](modules.md#getaccesstokenfn)
- [GetMessageByIdFunction](modules.md#getmessagebyidfunction)
- [Message](modules.md#message)
- [MessageActionType](modules.md#messageactiontype)
@@ -40,23 +38,13 @@
## Type Aliases
### ChatGPTErrorType
Ƭ **ChatGPTErrorType**: ``"unknown"`` \| ``"chatgpt:pool:account-on-cooldown"`` \| ``"chatgpt:pool:account-not-found"`` \| ``"chatgpt:pool:no-accounts"`` \| ``"chatgpt:pool:timeout"`` \| ``"chatgpt:pool:rate-limit"`` \| ``"chatgpt:pool:unavailable"``
#### Defined in
[src/types.ts:38](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L38)
___
### ContentType
Ƭ **ContentType**: ``"text"``
#### Defined in
[src/types.ts:223](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L223)
[src/types.ts:103](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L103)
___
@@ -78,7 +66,7 @@ https://chat.openapi.com/backend-api/conversation
#### Defined in
[src/types.ts:179](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L179)
[src/types.ts:59](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L59)
___
@@ -96,7 +84,7 @@ ___
#### Defined in
[src/types.ts:237](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L237)
[src/types.ts:117](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L117)
___
@@ -106,58 +94,7 @@ ___
#### Defined in
[src/types.ts:3](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L3)
___
### GetAccessTokenFn
Ƭ **GetAccessTokenFn**: (`{ email, password, sessionToken }`: { `email`: `string` ; `password`: `string` ; `sessionToken?`: `string` }) => `string` \| `Promise`<`string`\>
#### Type declaration
▸ (`{ email, password, sessionToken }`): `string` \| `Promise`<`string`\>
##### Parameters
| Name | Type |
| :------ | :------ |
| `{ email, password, sessionToken }` | `Object` |
| `email` | `string` |
| `password` | `string` |
| `sessionToken?` | `string` |
##### Returns
`string` \| `Promise`<`string`\>
#### Defined in
[src/types.ts:263](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L263)
[src/types.ts:3](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L3)
___
@@ -183,7 +120,7 @@ Returns a chat message from a store by its ID (or null if not found).
#### Defined in
[src/types.ts:56](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L56)
[src/types.ts:51](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L51)
___
@@ -208,7 +145,7 @@ ___
#### Defined in
[src/types.ts:243](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L243)
[src/types.ts:123](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L123)
___
@@ -218,7 +155,7 @@ ___
#### Defined in
[src/types.ts:17](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L17)
[src/types.ts:17](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L17)
___
@@ -235,7 +172,7 @@ ___
#### Defined in
[src/types.ts:256](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L256)
[src/types.ts:136](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L136)
___
@@ -245,7 +182,7 @@ ___
#### Defined in
[src/types.ts:261](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L261)
[src/types.ts:141](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L141)
___
@@ -263,7 +200,7 @@ ___
#### Defined in
[src/types.ts:206](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L206)
[src/types.ts:86](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L86)
___
@@ -280,17 +217,17 @@ ___
#### Defined in
[src/types.ts:225](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L225)
[src/types.ts:105](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L105)
___
### Role
Ƭ **Role**: ``"user"`` \| ``"assistant"``
Ƭ **Role**: ``"user"`` \| ``"assistant"`` \| ``"system"``
#### Defined in
[src/types.ts:1](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L1)
[src/types.ts:1](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L1)
___
@@ -312,7 +249,7 @@ ___
#### Defined in
[src/types.ts:19](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L19)
[src/types.ts:19](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L19)
___
@@ -322,21 +259,20 @@ ___
#### Type declaration
| Name | Type |
| :------ | :------ |
| `abortSignal?` | `AbortSignal` |
| `conversationId?` | `string` |
| `messageId?` | `string` |
| `onProgress?` | (`partialResponse`: [`ChatMessage`](interfaces/ChatMessage.md)) => `void` |
| `parentMessageId?` | `string` |
| `promptPrefix?` | `string` |
| `promptSuffix?` | `string` |
| `stream?` | `boolean` |
| `timeoutMs?` | `number` |
| Name | Type | Description |
| :------ | :------ | :------ |
| `abortSignal?` | `AbortSignal` | - |
| `messageId?` | `string` | - |
| `name?` | `string` | The name of a user in a multi-user chat. |
| `onProgress?` | (`partialResponse`: [`ChatMessage`](interfaces/ChatMessage.md)) => `void` | - |
| `parentMessageId?` | `string` | - |
| `stream?` | `boolean` | - |
| `systemMessage?` | `string` | - |
| `timeoutMs?` | `number` | - |
#### Defined in
[src/types.ts:5](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L5)
[src/types.ts:5](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L5)
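A sketch of filling in these options, including caller-side cancellation via `abortSignal` (the interface below is a local mirror of the new table above, not the library's exported type):

```typescript
// Local mirror of the documented SendMessageOptions fields (illustrative only).
interface SendMessageOptions {
  abortSignal?: AbortSignal
  messageId?: string
  name?: string
  onProgress?: (partial: { text: string }) => void
  parentMessageId?: string
  stream?: boolean
  systemMessage?: string
  timeoutMs?: number
}

const controller = new AbortController()
const opts: SendMessageOptions = {
  systemMessage: 'You are a terse assistant.',
  timeoutMs: 2 * 60 * 1000, // give up after two minutes
  abortSignal: controller.signal
}

// The caller can cancel an in-flight request from outside.
controller.abort()
```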
___
@@ -362,4 +298,4 @@ Upserts a chat message to a store.
#### Defined in
[src/types.ts:59](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L59)
[src/types.ts:54](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L54)

View file

@@ -4,98 +4,99 @@
## Table of contents
### Interfaces
- [ChatCompletionRequestMessage](../interfaces/openai.ChatCompletionRequestMessage.md)
- [ChatCompletionResponseMessage](../interfaces/openai.ChatCompletionResponseMessage.md)
- [CreateChatCompletionDeltaResponse](../interfaces/openai.CreateChatCompletionDeltaResponse.md)
- [CreateChatCompletionRequest](../interfaces/openai.CreateChatCompletionRequest.md)
- [CreateChatCompletionResponse](../interfaces/openai.CreateChatCompletionResponse.md)
- [CreateChatCompletionResponseChoicesInner](../interfaces/openai.CreateChatCompletionResponseChoicesInner.md)
- [CreateCompletionResponseUsage](../interfaces/openai.CreateCompletionResponseUsage.md)
### Type Aliases
- [CompletionParams](openai.md#completionparams)
- [CompletionResponse](openai.md#completionresponse)
- [CompletionResponseChoices](openai.md#completionresponsechoices)
- [CompletionResponseUsage](openai.md#completionresponseusage)
- [ReverseProxyCompletionParams](openai.md#reverseproxycompletionparams)
- [ChatCompletionRequestMessageRoleEnum](openai.md#chatcompletionrequestmessageroleenum)
- [ChatCompletionResponseMessageRoleEnum](openai.md#chatcompletionresponsemessageroleenum)
- [CreateChatCompletionRequestStop](openai.md#createchatcompletionrequeststop)
### Variables
- [ChatCompletionRequestMessageRoleEnum](openai.md#chatcompletionrequestmessageroleenum-1)
- [ChatCompletionResponseMessageRoleEnum](openai.md#chatcompletionresponsemessageroleenum-1)
## Type Aliases
### CompletionParams
### ChatCompletionRequestMessageRoleEnum
Ƭ **CompletionParams**: `Object`
#### Type declaration
| Name | Type | Description |
| :------ | :------ | :------ |
| `best_of?` | `number` | Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`. **Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`. |
| `echo?` | `boolean` | Echo back the prompt in addition to the completion |
| `frequency_penalty?` | `number` | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) |
| `logit_bias?` | `Record`<`string`, `number`\> | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{"50256": -100}` to prevent the <\|endoftext\|> token from being generated. |
| `logprobs?` | `number` | Include the log probabilities on the `logprobs` most likely tokens, as well the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our [Help center](https://help.openai.com) and describe your use case. |
| `max_tokens?` | `number` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
| `model` | `string` | ID of the model to use. |
| `presence_penalty?` | `number` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model\'s likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details) |
| `prompt` | `string` | The string prompt to generate a completion for. |
| `stop?` | `string`[] | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
| `suffix?` | `string` | The suffix that comes after a completion of inserted text. |
| `temperature?` | `number` | What [sampling temperature](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277) to use. Higher values means the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or `top_p` but not both. |
| `top_p?` | `number` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. |
| `user?` | `string` | A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse. [Learn more](/docs/usage-policies/end-user-ids). |
Ƭ **ChatCompletionRequestMessageRoleEnum**: typeof [`ChatCompletionRequestMessageRoleEnum`](openai.md#chatcompletionrequestmessageroleenum-1)[keyof typeof [`ChatCompletionRequestMessageRoleEnum`](openai.md#chatcompletionrequestmessageroleenum-1)]
#### Defined in
[src/types.ts:62](https://github.com/transitive-bullshit/chatgpt-api/blob/607fccf/src/types.ts#L62)
[src/types.ts:186](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L186)
[src/types.ts:191](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L191)
___
### ChatCompletionResponseMessageRoleEnum
Ƭ **ChatCompletionResponseMessageRoleEnum**: typeof [`ChatCompletionResponseMessageRoleEnum`](openai.md#chatcompletionresponsemessageroleenum-1)[keyof typeof [`ChatCompletionResponseMessageRoleEnum`](openai.md#chatcompletionresponsemessageroleenum-1)]
#### Defined in
[src/types.ts:212](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L212)
[src/types.ts:217](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L217)
___
### CreateChatCompletionRequestStop
Ƭ **CreateChatCompletionRequestStop**: `string`[] \| `string`
**`Export`**
#### Defined in
[src/types.ts:303](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L303)
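Since the alias accepts either a single string or an array of strings, consuming code typically normalizes it before use. The following is an illustrative sketch; the `normalizeStop` helper is not part of this package:

```typescript
// Mirrors the alias above: stop sequences may be a single string or an array.
type CreateChatCompletionRequestStop = string[] | string

// Illustrative helper (not exported by this package): normalize to an array
// so downstream code can always iterate over stop sequences.
function normalizeStop(stop?: CreateChatCompletionRequestStop): string[] {
  if (stop === undefined) return []
  return Array.isArray(stop) ? stop : [stop]
}

console.log(normalizeStop('###'))
console.log(normalizeStop(['\n', '###']))
```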
## Variables
### ChatCompletionRequestMessageRoleEnum
`Const` **ChatCompletionRequestMessageRoleEnum**: `Object`
#### Type declaration
| Name | Type |
| :------ | :------ |
| `Assistant` | ``"assistant"`` |
| `System` | ``"system"`` |
| `User` | ``"user"`` |
#### Defined in
[src/types.ts:186](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L186)
[src/types.ts:191](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L191)
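The paired `Ƭ` type alias and `Const` object above follow the standard TypeScript pattern of declaration-merging a frozen object with a union type derived from its values. A sketch of the pattern, reconstructed here for illustration with the values from the table above:

```typescript
// Const object and derived union type sharing one name, as documented above.
const ChatCompletionRequestMessageRoleEnum = {
  System: 'system',
  User: 'user',
  Assistant: 'assistant'
} as const

type ChatCompletionRequestMessageRoleEnum =
  (typeof ChatCompletionRequestMessageRoleEnum)[keyof typeof ChatCompletionRequestMessageRoleEnum]

// The derived union type is exactly 'system' | 'user' | 'assistant':
const role: ChatCompletionRequestMessageRoleEnum = 'user'
console.log(role, ChatCompletionRequestMessageRoleEnum.Assistant)
```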
___
### ChatCompletionResponseMessageRoleEnum
`Const` **ChatCompletionResponseMessageRoleEnum**: `Object`
#### Type declaration
| Name | Type |
| :------ | :------ |
| `Assistant` | ``"assistant"`` |
| `System` | ``"system"`` |
| `User` | ``"user"`` |
#### Defined in
[src/types.ts:212](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L212)
[src/types.ts:217](https://github.com/transitive-bullshit/chatgpt-api/blob/1e4ddd6/src/types.ts#L217)


chatgpt / [Exports](modules.md)
# ChatGPT API <!-- omit in toc -->
> Node.js client for the official [ChatGPT](https://openai.com/blog/chatgpt/) API.
[![NPM](https://img.shields.io/npm/v/chatgpt.svg)](https://www.npmjs.com/package/chatgpt) [![Build Status](https://github.com/transitive-bullshit/chatgpt-api/actions/workflows/test.yml/badge.svg)](https://github.com/transitive-bullshit/chatgpt-api/actions/workflows/test.yml) [![MIT License](https://img.shields.io/badge/license-MIT-blue)](https://github.com/transitive-bullshit/chatgpt-api/blob/main/license) [![Prettier Code Formatting](https://img.shields.io/badge/code_style-prettier-brightgreen.svg)](https://prettier.io)
## Updates
<details open>
<summary><strong>March 1, 2023</strong></summary>
<br/>
The [official OpenAI chat completions API](https://platform.openai.com/docs/guides/chat) has been released, and it is now the default for this package! 🔥
| Method | Free? | Robust? | Quality? |
| --------------------------- | ------ | -------- | ----------------------- |
| `ChatGPTAPI` | ❌ No | ✅ Yes | ✅️ Real ChatGPT models |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
**Note**: We strongly recommend using `ChatGPTAPI` since it uses the officially supported API from OpenAI. We may remove support for `ChatGPTUnofficialProxyAPI` in a future release.
1. `ChatGPTAPI` - Uses the `gpt-3.5-turbo-0301` model with the official OpenAI chat completions API (official, robust approach, but it's not free)
2. `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
</details>
<details>
<summary><strong>Previous Updates</strong></summary>
<br/>
<details>
<summary><strong>Feb 19, 2023</strong></summary>
<br/>
We now provide three ways of accessing the unofficial ChatGPT API, all of which have tradeoffs:
| Method | Free? | Robust? | Quality? |
| --------------------------- | ------ | -------- | ----------------- |
| `ChatGPTAPI` | ❌ No | ✅ Yes | ☑️ Mimics ChatGPT |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
| `ChatGPTAPIBrowser` (v3) | ✅ Yes | ❌ No | ✅ Real ChatGPT |
**Note**: I recommend that you use either `ChatGPTAPI` or `ChatGPTUnofficialProxyAPI`.
</details>
<details>
<summary><strong>Feb 5, 2023</strong></summary>
<br/>
OpenAI has disabled the leaked chat model we were previously using, so we're now defaulting to `text-davinci-003`, which is not free.
We've found several other hidden, fine-tuned chat models, but OpenAI keeps disabling them, so we're searching for alternative workarounds.
<details>
<summary><strong>Feb 1, 2023</strong></summary>
<br/>
This package no longer requires any browser hacks – **it is now using the official OpenAI completions API** with a leaked model that ChatGPT uses under the hood. 🔥
Make sure you're using `node >= 18` so `fetch` is available (or `node >= 14` if you install a fetch polyfill).
To use this module from Node.js, you need to pick between two methods:
| Method | Free? | Robust? | Quality? |
| --------------------------- | ------ | -------- | ----------------------- |
| `ChatGPTAPI` | ❌ No | ✅ Yes | ✅️ Real ChatGPT models |
| `ChatGPTUnofficialProxyAPI` | ✅ Yes | ☑️ Maybe | ✅ Real ChatGPT |
1. `ChatGPTAPI` - Uses the `gpt-3.5-turbo-0301` model with the official OpenAI chat completions API (official, robust approach, but it's not free). You can override the model, completion params, and system message to fully customize your assistant.
2. `ChatGPTUnofficialProxyAPI` - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
Both approaches have very similar APIs, so it should be simple to swap between them.
**Note**: We strongly recommend using `ChatGPTAPI` since it uses the officially supported API from OpenAI. We may remove support for `ChatGPTUnofficialProxyAPI` in a future release.
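Because both classes expose a compatible `sendMessage`, application code can treat them interchangeably behind a small structural interface. The interface and stand-in client below are illustrative only (they are not exports of this package), which also lets the sketch run without an API key or network access:

```typescript
// Minimal structural interface both clients satisfy (illustrative, not an export).
interface SendsMessages {
  sendMessage(
    text: string,
    opts?: { parentMessageId?: string }
  ): Promise<{ id: string; text: string }>
}

// Stand-in implementation so the sketch is runnable without an API key.
function makeEchoClient(): SendsMessages {
  return {
    async sendMessage(text) {
      return { id: 'local-1', text: `echo: ${text}` }
    }
  }
}

async function demo(client: SendsMessages) {
  const res = await client.sendMessage('Hello World!')
  console.log(res.text)
}

demo(makeEchoClient())
```

Swapping in `ChatGPTAPI` or `ChatGPTUnofficialProxyAPI` for the stand-in only changes construction, not the calling code.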
### Usage - ChatGPTAPI
Sign up for an [OpenAI API key](https://platform.openai.com/overview) and store it in your environment.
You can override the default `model` (`gpt-3.5-turbo-0301`) and any [OpenAI chat completion params](https://platform.openai.com/docs/api-reference/chat/create) using `completionParams`:
```ts
const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  completionParams: {
    model: 'gpt-3.5-turbo-0301',
    temperature: 0.5
  }
})
```
If you want to track the conversation, you'll need to pass the `parentMessageId` like this:
```ts
const api = new ChatGPTAPI({ apiKey: process.env.OPENAI_API_KEY })

// send a message and wait for the response
let res = await api.sendMessage('What is OpenAI?')
console.log(res.text)

// send a follow-up
res = await api.sendMessage('Can you expand on that?', {
  parentMessageId: res.id
})
console.log(res.text)

// send another follow-up
res = await api.sendMessage('What were we talking about?', {
  parentMessageId: res.id
})
console.log(res.text)
```
If you want to see more info about what's actually being sent to [OpenAI's chat completions API](https://platform.openai.com/docs/api-reference/chat/create), set the `debug: true` option in the `ChatGPTAPI` constructor:
```ts
const api = new ChatGPTAPI({
  apiKey: process.env.OPENAI_API_KEY,
  debug: true
})
```
We default to a basic `systemMessage`. You can override this in either the `ChatGPTAPI` constructor or `sendMessage`:
```ts
const res = await api.sendMessage('what is the answer to the universe?', {
  systemMessage: `You are ChatGPT, a large language model trained by OpenAI. You answer as concisely as possible for each response. If you are generating a list, do not have too many items.
Current date: ${new Date().toISOString()}\n\n`
})
```
### Usage - ChatGPTUnofficialProxyAPI
The API for `ChatGPTUnofficialProxyAPI` is almost exactly the same. You just need to provide a ChatGPT `accessToken` instead of an OpenAI API key.
```ts
import { ChatGPTUnofficialProxyAPI } from 'chatgpt'
```

See [demos/demo-reverse-proxy](./demos/demo-reverse-proxy.ts) for a full example:

```bash
npx tsx demos/demo-reverse-proxy.ts
```
`ChatGPTUnofficialProxyAPI` messages also contain a `conversationId` in addition to `parentMessageId`, since the ChatGPT webapp can't reference messages across different conversations.
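For illustration, a follow-up's options can be derived from the previous response like this. The `ChatMessageLike` shape and `followUpOptions` helper below are hypothetical, not package exports:

```typescript
// Hypothetical minimal shape; the package's real message type is richer.
interface ChatMessageLike {
  id: string
  conversationId?: string
  text: string
}

// Include conversationId only when the previous message carries one,
// as ChatGPTUnofficialProxyAPI responses do.
function followUpOptions(prev: ChatMessageLike) {
  return {
    parentMessageId: prev.id,
    ...(prev.conversationId ? { conversationId: prev.conversationId } : {})
  }
}

console.log(followUpOptions({ id: 'msg-2', conversationId: 'conv-1', text: 'hi' }))
```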
#### Reverse Proxy
You can override the reverse proxy by passing `apiReverseProxyUrl`:
```ts
const api = new ChatGPTUnofficialProxyAPI({
  accessToken: process.env.OPENAI_ACCESS_TOKEN,
  apiReverseProxyUrl: 'https://your-reverse-proxy-server.com/api/conversation'
})
```
Known reverse proxies run by community members include:
| Reverse Proxy URL | Author | Rate Limits | Last Checked |
| ------------------------------------------------ | -------------------------------------------- | ----------------- | ------------ |
| `https://chat.duti.tech/api/conversation` | [@acheong08](https://github.com/acheong08) | 120 req/min by IP | 2/19/2023 |
| `https://gpt.pawan.krd/backend-api/conversation` | [@PawanOsman](https://github.com/PawanOsman) | ? | 2/19/2023 |
Note: info on how the reverse proxies work is not being published at this time in order to prevent OpenAI from disabling access.
#### Access Token
To use `ChatGPTUnofficialProxyAPI`, you'll need an OpenAI access token from the ChatGPT webapp. To do this, you can use any of the following methods which take an `email` and `password` and return an access token:
- Node.js libs
- [ericlewis/openai-authenticator](https://github.com/ericlewis/openai-authenticator)
- [michael-dm/openai-token](https://github.com/michael-dm/openai-token)
- [allanoricil/chat-gpt-authenticator](https://github.com/AllanOricil/chat-gpt-authenticator)
- Python libs
- [acheong08/OpenAIAuth](https://github.com/acheong08/OpenAIAuth)
These libraries work with email + password accounts (i.e., they do not support accounts that authenticate via Microsoft / Google).
Alternatively, you can manually get an `accessToken` by logging in to the ChatGPT webapp and then opening `https://chat.openai.com/api/auth/session`, which will return a JSON object containing your `accessToken` string.
Access tokens last for days.
**Note**: using a reverse proxy will expose your access token to a third-party. There shouldn't be any adverse effects possible from this, but please consider the risks before using this method.
- [Discord Bot #6 (Shakespeare bot)](https://gist.github.com/TheBrokenRail/4b37e7c44e8f721d8bd845050d034c16)
- [WeChat Bot #1](https://github.com/AutumnWhj/ChatGPT-wechat-bot)
- [WeChat Bot #2](https://github.com/fuergaosi233/wechat-chatgpt)
- [WeChat Bot #3](https://github.com/wangrongding/wechat-bot)
- [WeChat Bot #4](https://github.com/darknightlab/wechat-bot)
- [WeChat Bot #5](https://github.com/sunshanpeng/wechaty-chatgpt)
- [QQ Bot (plugin for Yunzai-bot)](https://github.com/ikechan8370/chatgpt-plugin)
- [Flutter ChatGPT API](https://github.com/coskuncay/flutter_chatgpt_api)
- [Carik Bot](https://github.com/luridarmawan/Carik)
- [Github Action for reviewing PRs](https://github.com/kxxt/chatgpt-action/)
- [WhatsApp Bot #1](https://github.com/askrella/whatsapp-chatgpt) (DALL-E + Whisper support 💪)
- [WhatsApp Bot #2](https://github.com/amosayomide05/chatgpt-whatsapp-bot)
- [WhatsApp Bot #3](https://github.com/pascalroget/whatsgpt) (multi-user support)
- [WhatsApp Bot #4](https://github.com/noelzappy/chatgpt-whatsapp) (schedule periodic messages)
- [WhatsApp Bot #5](https://github.com/hujanais/bs-chat-gpt3-api) (RaspberryPi + ngrok + Twilio)
- [WhatsApp Bot #6](https://github.com/dannysantino/whatsgpt) (Session and chat history storage with MongoStore)
- [Matrix Bot](https://github.com/matrixgpt/matrix-chatgpt-bot)
- [Rental Cover Letter Generator](https://sharehouse.app/ai)
- [Assistant CLI](https://github.com/diciaup/assistant-cli)
- [Clippy the Salesforce chatbot](https://github.com/sebas00/chatgptclippy) ClippyJS joke bot
- [ai-assistant](https://github.com/youking-lib/ai-assistant) Chat assistant
- [Feishu Bot](https://github.com/linjungz/feishu-chatgpt-bot)
- [DomainGPT: Discover available domain names](https://github.com/billylo1/DomainGPT)
- [AI Poem Generator](https://aipoemgenerator.com/)
- [Next.js ChatGPT With Firebase](https://github.com/youngle316/chatgpt)
- [ai-commit – GPT-3 Commit Message Generator](https://github.com/insulineru/ai-commit)
If you create a cool integration, feel free to open a PR and add it to the list.