🧪 Usage
Start analyzing reports!
Before running `trs`, you must set your OpenAI API key via an environment variable:
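Assuming the standard `OPENAI_API_KEY` variable name used by OpenAI's client libraries:

```shell
# Set the API key for the current shell session (replace with your real key)
export OPENAI_API_KEY="your-api-key-here"
```

For persistence across sessions, add the line to your shell profile instead.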
| Command | Description | Arguments |
|---|---|---|
| `!summ` | Generate summary of URL, extract MITRE ATT&CK TTPs, and generate Mermaid.js mindmap | URL |
| `!detect` | Identify threat detection opportunities within URL content | URL |
| `!custom` | Process URL content with custom prompt | prompt_name, URL |
URLs provided to a `!command` go through the following workflow:
1. Retrieve the URL and parse it to text content
2. Split the full text into smaller chunks
3. Store the chunked text and its embeddings in the vector database, with the source URL as metadata
4. Send the full text with the specified prompt template (the command) to OpenAI and return the response
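The chunking step above can be sketched as follows; the chunk size and overlap values here are illustrative assumptions, not `trs`'s actual parameters:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks suitable for embedding.

    Overlap preserves context across chunk boundaries so a sentence
    split at a boundary still appears whole in at least one chunk.
    """
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

Each chunk would then be embedded and stored in the vector database alongside its source URL.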
Custom prompt templates can be saved to the `prompts/` directory as text files with the `.txt` extension. The `!custom` command looks up prompts by file basename in that directory, adds the URL's text content to the template, and sends the result to the LLM for processing. Custom prompts must include the format string `{document}` so the URL's text content can be inserted.
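A minimal example of how such a template is filled; the filename and prompt wording are hypothetical:

```python
# Hypothetical template contents, e.g. saved as prompts/ioc_extract.txt
template = (
    "Extract all indicators of compromise from the report below.\n"
    "\n"
    "{document}"
)

# The !custom command substitutes {document} with the URL's parsed text
prompt = template.format(document="Sample report text...")
```

A template missing the `{document}` placeholder would leave the LLM with no report content to work from.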
Before you can use the RAG chat functionality, you must first process a URL with one of the commands above so the vector database has context to draw on for your questions.
Any input that is not a `!command` is processed as RAG question answering over the data stored in the vector database. You currently can't ask the LLM questions outside of your stored context; if the answer is not available in the context, you won't get an answer.
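The retrieval step behind this Q&A flow can be sketched as a generic similarity search; this is an illustrative implementation, not `trs`'s actual code:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec: list[float], stored: list[dict], k: int = 3) -> list[str]:
    """Return the text of the k stored chunks most similar to the query."""
    ranked = sorted(
        stored,
        key=lambda item: cosine_similarity(query_vec, item["embedding"]),
        reverse=True,
    )
    return [item["text"] for item in ranked[:k]]
```

The retrieved chunks are then passed to the LLM as context, which is why questions outside the stored data go unanswered.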