trs: documentation

Use

Start analyzing reports!

Before running trs, you must set your OpenAI API key via environment variable:

export OPENAI_API_KEY="sk-...."

Start the application

$ python main.py --chat
2023-10-14 16:24:00.645 | INFO     | trs.vectordb:get_or_create_collection:36 - Using collection: trs
2023-10-14 16:24:00.649 | SUCCESS  | trs.vectordb:__init__:33 - Loaded database

commands:
* !summ <url> - summarize a threat report
* !detect <url> - identify detections in threat report
* !custom <name> <url> - run custom prompt against URL
* !exit|!quit - exit application
ready to chat!

๐Ÿ’€ >> _

Commands

| Command | Description | Arguments |
| --- | --- | --- |
| `!summ` | Generate summary of URL, extract MITRE ATT&CK TTPs, and generate a Mermaid.js mindmap | URL |
| `!detect` | Identify threat detection opportunities within URL content | URL |
| `!custom` | Process URL content with a custom prompt | prompt_name, URL |
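As an illustration of how chat input could be routed to the commands in the table above, here is a minimal parsing sketch. The command names come from the docs; the parsing function itself is hypothetical, not the actual trs implementation.

```python
def parse_command(line: str):
    """Split a chat line into (command, args), or (None, line) for RAG QnA."""
    line = line.strip()
    if not line.startswith("!"):
        return None, line  # plain text is treated as a RAG question
    parts = line.split()
    command, args = parts[0], parts[1:]
    return command, args

# parse_command("!summ https://example.com/report")
#   -> ("!summ", ["https://example.com/report"])
# parse_command("Summarize the LemurLoot malware functionality")
#   -> (None, "Summarize the LemurLoot malware functionality")
```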

Workflow

URLs provided to a !command go through the following workflow:

  1. Retrieve URL and parse to text content

  2. Split the full text into smaller chunks

  3. Store chunked text and their embeddings in vector database with source URL metadata

  4. Send full text content with specified prompt template (the command) to OpenAI and return response
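Step 2 above, splitting the full text into smaller chunks, can be sketched as a simple overlapping character splitter. The chunk size and overlap values here are illustrative assumptions, not trs's actual settings.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks suitable for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable
    from either neighboring chunk.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```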

Custom Prompts

Custom prompt templates can be saved to the prompts/ directory as text files with the .txt extension. The !custom command will look for prompts by file basename in that directory, add the URL's text content to the template, and send it to the LLM for processing.

Custom prompts must include the format string {document} so the URL text content can be added.
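The lookup-and-fill behavior described above can be sketched as follows; the `render_custom_prompt` function name is hypothetical, but the `prompts/` directory layout, `.txt` extension, and `{document}` placeholder follow the docs.

```python
from pathlib import Path

def render_custom_prompt(name: str, document: str, prompts_dir: str = "prompts") -> str:
    """Load prompts/<name>.txt and fill its {document} placeholder."""
    template = (Path(prompts_dir) / f"{name}.txt").read_text()
    return template.format(document=document)
```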

Retrieval-Augmented-Generation

Before you can use the RAG chat functionality, you must first process a URL with one of the commands above so the vector database has some context to use for your questions.

Any input that is not a !command will be processed for RAG/QnA over the data stored in the vector database.

The LLM currently cannot answer questions outside of your stored context; if the answer is not available in the retrieved context, you won't get an answer.
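Conceptually, the RAG step stitches the retrieved chunks into a context block that constrains the model to the stored documents. The prompt wording below is an assumption for illustration, not trs's actual QnA template.

```python
def build_qna_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a context-restricted QnA prompt from vector DB results."""
    context = "\n---\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```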

Example:

๐Ÿ’€ >> Summarize the LemurLoot malware functionality        
2023-10-14 14:51:51.140 | INFO     | trs.vectordb:query:84 - Querying database for: Summarize the LemurLoot malware functionality
2023-10-14 14:51:51.840 | INFO     | trs.vectordb:query:90 - Found 3 results
2023-10-14 14:51:51.841 | INFO     | trs.llm:qna:98 - sending qna prompt
2023-10-14 14:51:51.841 | INFO     | trs.llm:_call_openai:41 - Calling OpenAI
2023-10-14 14:51:51.854 | INFO     | trs.llm:_call_openai:59 - token count: 2443
๐Ÿค– >>
The LemurLoot malware has several functionalities. It uses the header field โ€œX-siLock-Step1โ€ to receive commands from the operator, with two well-defined commands: -1 and -2.
Command โ€œ-1โ€ retrieves Azure system settings from MOVEit Transfer and performs SQL queries to retrieve files. Command โ€œ-2โ€ deletes a user account with the LoginName and
RealName set to "Health Check Service". If any other values are received, the web shell opens a specified file and retrieves it. If no values are specified, it creates the
โ€œHealth Check Serviceโ€ admin user and creates an active session.
