# Proxy HuggingFace TGI Endpoint

Proxy chat completions through a given HuggingFace TGI endpoint and return the response and analysis.

Endpoint: `POST /api/v1/proxy/tgi/{name}`

Version: 1

Security:

## Header parameters:

- `HL-Project-Id` (string) The ID or alias for the Project that governs request processing. Example: "internal-search-chatbot"
- `X-Requester-Id` (string) The identifier for the requester, used if MLDR is enabled
- `X-LLM-Block-Unsafe` (boolean) Whether to block unsafe input and output
- `X-LLM-Block-Unsafe-Input` (boolean) Whether to block unsafe input
- `X-LLM-Block-Unsafe-Output` (boolean) Whether to block unsafe output
- `X-LLM-Skip-Prompt-Injection-Detection` (boolean) Whether to skip prompt injection detection
- `X-LLM-Block-Prompt-Injection` (boolean) Whether to block prompt injection
- `X-LLM-Prompt-Injection-Scan-Type` (string) The type of prompt injection scan to use. Enum: "quick", "full"
- `X-LLM-Skip-Input-DOS-Detection` (boolean) Whether to skip input denial-of-service detection
- `X-LLM-Block-Input-DOS-Detection` (boolean) Whether to block on input denial-of-service detection
- `X-LLM-Input-DOS-Detection-Threshold` (string) The threshold for input denial-of-service detection
- `X-LLM-Skip-Input-PII-Detection` (boolean) Whether to skip input personally identifiable information (PII) detection
- `X-LLM-Skip-Output-PII-Detection` (boolean) Whether to skip output PII detection
- `X-LLM-Block-Input-PII` (boolean) Whether to block on input PII detection
- `X-LLM-Block-Output-PII` (boolean) Whether to block on output PII detection
- `X-LLM-Redact-Input-PII` (boolean) Whether to redact input PII
- `X-LLM-Redact-Output-PII` (boolean) Whether to redact output PII
- `X-LLM-Redact-Type` (string) The type of redaction to use. Enum: "entity", "strict"
- `X-LLM-Entity-Type` (string) The type of entity to redact. Enum: "strict", "all"
- `X-LLM-Skip-Input-Code-Detection` (boolean) Whether to skip input code detection
- `X-LLM-Skip-Output-Code-Detection` (boolean) Whether to skip output code detection
- `X-LLM-Block-Input-Code-Detection` (boolean) Whether to block on input code detection
- `X-LLM-Block-Output-Code-Detection` (boolean) Whether to block on output code detection
- `X-LLM-Skip-Guardrail-Detection` (boolean) Whether to skip guardrail detection
- `X-LLM-Block-Guardrail-Detection` (boolean) Whether to block on guardrail detection
- `X-LLM-Skip-Input-URL-Detection` (boolean) Whether to skip input URL detection
- `X-LLM-Skip-Output-URL-Detection` (boolean) Whether to skip output URL detection

## Path parameters:

- `name` (string, required) The name of the TGI endpoint defined in service settings

## Request fields (application/json):

- `model` (string) The model to use for completions
- `messages` (array)
- `messages.role` (string) The role of the message
- `messages.content` (string) The content of the message
- `temperature` (number) The temperature to use for completions
- `max_tokens` (number) The maximum number of tokens to generate
- `top_p` (number) The top-p value to use for completions
- `frequency_penalty` (number) The frequency penalty to use for completions
- `presence_penalty` (number) The presence penalty to use for completions

## Response 502 fields (application/json):

- `detail` (string)
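As a sketch, a call to this endpoint can be assembled as below. The base URL and the `build_tgi_proxy_request` helper are illustrative assumptions, not part of the API; boolean governance headers are assumed to be sent as lowercase string values.

```python
import json
from urllib.parse import quote


def build_tgi_proxy_request(base_url, endpoint_name, messages,
                            project_id=None, block_unsafe=None,
                            temperature=None, max_tokens=None):
    """Assemble the URL, headers, and JSON body for
    POST /api/v1/proxy/tgi/{name} (illustrative helper).

    Optional governance headers are only attached when explicitly set,
    so the service's own defaults apply otherwise.
    """
    url = f"{base_url}/api/v1/proxy/tgi/{quote(endpoint_name)}"
    headers = {"Content-Type": "application/json"}
    if project_id is not None:
        # Project that governs request processing
        headers["HL-Project-Id"] = project_id
    if block_unsafe is not None:
        # Assumed wire format for boolean headers: "true" / "false"
        headers["X-LLM-Block-Unsafe"] = str(block_unsafe).lower()

    body = {"messages": messages}
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return url, headers, json.dumps(body)


url, headers, body = build_tgi_proxy_request(
    "https://example.com",  # assumed base URL
    "my-tgi",               # the TGI endpoint name from service settings
    [{"role": "user", "content": "Hi"}],
    project_id="internal-search-chatbot",
    block_unsafe=True,
    temperature=0.2,
    max_tokens=64,
)
```

The resulting `url`, `headers`, and `body` can then be sent with any HTTP client; a 502 response carries a JSON object with a `detail` string describing the upstream failure.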