Testing Multiple Local LLM with LM Studio

Workflow Overview

This workflow contains 21 nodes and automates testing of multiple local LLMs served by LM Studio: it takes a chat prompt, runs it against every model currently loaded on the LM Studio server, measures each model's response time, computes text metrics (readability score, word and sentence counts, average word and sentence length), and optionally appends the results to a Google Sheet for comparison.

Workflow Source Code

{
  "id": "WulUYgcXvako9hBy",
  "meta": {
    "instanceId": "d6b86682c7e02b79169c1a61ad0484dcda5bc8b0ea70f1a95dac239c2abfd057",
    "templateCredsSetupCompleted": true
  },
  "name": "Testing Mulitple Local LLM with LM Studio",
  "tags": [
    {
      "id": "RkTiZTdbLvr6uzSg",
      "name": "Training",
      "createdAt": "2024-06-18T16:09:35.806Z",
      "updatedAt": "2024-06-18T16:09:35.806Z"
    },
    {
      "id": "W3xdiSeIujD7XgBA",
      "name": "Template",
      "createdAt": "2024-06-18T22:15:34.874Z",
      "updatedAt": "2024-06-18T22:15:34.874Z"
    }
  ],
  "nodes": [
    {
      "id": "08c457ef-5c1f-46d8-a53e-f492b11c83f9",
      "name": "Sticky Note",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1600,
        420
      ],
      "parameters": {
        "color": 6,
        "width": 478.38709677419376,
        "height": 347.82258064516134,
        "content": "## 🧠Text Analysis
### Readability Score Ranges:
When testing model responses, readability scores can range across different levels. Here’s a breakdown:

- **90–100**: Very easy to read (5th grade or below)
- **80–89**: Easy to read (6th grade)
- **70–79**: Fairly easy to read (7th grade)
- **60–69**: Standard (8th to 9th grade)
- **50–59**: Fairly difficult (10th to 12th grade)
- **30–49**: Difficult (College)
- **0–29**: Very difficult (College graduate)
- **Below 0**: Extremely difficult (Post-graduate level)
"
      },
      "typeVersion": 1
    },
    {
      "id": "7801734c-5eb9-4abd-b234-e406462931f7",
      "name": "Get Models",
      "type": "n8n-nodes-base.httpRequest",
      "onError": "continueErrorOutput",
      "position": [
        20,
        180
      ],
      "parameters": {
        "url": "http://192.168.1.179:1234/v1/models",
        "options": {
          "timeout": 10000,
          "allowUnauthorizedCerts": false
        }
      },
      "typeVersion": 4.2
    },
    {
      "id": "5ee93d9a-ad2e-4ea9-838e-2c12a168eae6",
      "name": "Sticky Note1",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -140,
        -100
      ],
      "parameters": {
        "width": 377.6129032258063,
        "height": 264.22580645161304,
        "content": "## ⚙️ 2. Update Local IP
Update the **'Base URL'** `http://192.168.1.1:1234/v1/models` in the workflow to match the IP of your LM Studio server. [Running LM Server](https://lmstudio.ai/docs/basics/server)

This node will query the LM Studio server to retrieve a list of all loaded model IDs at the time of the query. If you change or add models to LM Studio, you’ll need to rerun this node to get an updated list of active LLMs.
"
      },
      "typeVersion": 1
    },
    {
      "id": "f2b6a6ed-0ef1-4f2c-8350-9abd59d08e61",
      "name": "When chat message received",
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "position": [
        -300,
        180
      ],
      "webhookId": "39c3c6d5-ea06-4faa-b0e3-4e77a05b0297",
      "parameters": {
        "options": {}
      },
      "typeVersion": 1.1
    },
    {
      "id": "dbaf0ad1-9027-4317-a996-33a3fcc9e258",
      "name": "Sticky Note2",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -740,
        200
      ],
      "parameters": {
        "width": 378.75806451612857,
        "height": 216.12903225806457,
        "content": "## 🛠️1. Setup - LM Studio
First, download and install [LM Studio](https://lmstudio.ai/). Identify which LLM models you want to use for testing.

Next, the selected models are loaded into the server capabilities to prepare them for testing. For a detailed guide on how to set up multiple models, refer to the [LM Studio Basics](https://lmstudio.ai/docs/basics) documentation.
"
      },
      "typeVersion": 1
    },
    {
      "id": "36770fd1-7863-4c42-a68d-8d240ae3683b",
      "name": "Sticky Note3",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        360,
        400
      ],
      "parameters": {
        "width": 570.0000000000002,
        "height": 326.0645161290325,
        "content": "## 3. 💡Update the LM Settings

From here, you can modify the following
 parameters to fine-tune model behavior:

- **Temperature**: Controls randomness. Higher values (e.g., 1.0) produce more diverse results, while lower values (e.g., 0.2) make responses more focused and deterministic.
- **Top P**: Adjusts nucleus sampling, where the model considers only a subset of probable tokens. A lower value (e.g., 0.5) narrows the response range.
- **Presence Penalty**: Penalizes new tokens based on their presence in the input, encouraging the model to generate more varied responses.
"
      },
      "typeVersion": 1
    },
    {
      "id": "6b36f094-a3bf-4ff7-9385-4f7a2c80d54f",
      "name": "Get timeDifference",
      "type": "n8n-nodes-base.dateTime",
      "position": [
        1600,
        160
      ],
      "parameters": {
        "endDate": "={{ $json.endDateTime }}",
        "options": {},
        "operation": "getTimeBetweenDates",
        "startDate": "={{ $('Capture Start Time').item.json.startDateTime }}"
      },
      "typeVersion": 2
    },
    {
      "id": "a0b8f29d-2f2f-4fcf-a54a-dff071e321e5",
      "name": "Sticky Note4",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        1900,
        -260
      ],
      "parameters": {
        "width": 304.3225806451618,
        "height": 599.7580645161281,
        "content": "## 📊4. Create Google Sheet (Optional)
1. First, create a Google Sheet with the following headers:
 - Prompt
 - Time Sent
 - Time Received
 - Total Time Spent
 - Model
 - Response
 - Readability Score
 - Average Word Length
 - Word Count
 - Sentence Count
 - Average Sentence Length
2. After creating the sheet, update the corresponding Google Sheets node in the workflow to map the data fields correctly.
"
      },
      "typeVersion": 1
    },
    {
      "id": "d376a5fb-4e07-42a3-aa0c-8ccc1b9feeb7",
      "name": "Sticky Note5",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        -760,
        -200
      ],
      "parameters": {
        "color": 5,
        "width": 359.2903225806448,
        "height": 316.9032258064518,
        "content": "## 🏗️Setup Steps
1. **Download and Install LM Studio**: Ensure LM Studio is correctly installed on your machine.
2. **Update the Base URL**: Replace the base URL with the IP address of your LLM instance. Ensure the connection is established.
3. **Configure LLM Settings**: Verify that your LLM models are properly set up and configured in LM Studio.
4. **Create a Google Sheet**: Set up a Google Sheet with the necessary headers (Prompt, Time Sent, Time Received, etc.) to track your testing results.
"
      },
      "typeVersion": 1
    },
    {
      "id": "b21cad30-573e-4adf-a1d0-f34cf9628819",
      "name": "Sticky Note6",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        560,
        -160
      ],
      "parameters": {
        "width": 615.8064516129025,
        "height": 272.241935483871,
        "content": "## 📖Prompting Multiple LLMs

When testing for specific outcomes (such as conciseness or readability), you can add a **System Prompt** in the LLM Chain to guide the models' responses.

**System Prompt Suggestion**:
- Focus on ensuring that responses are concise, clear, and easily understandable by a 5th-grade reading level. 
- This prompt will help you compare models based on how well they meet readability standards and stay on point.
 
Adjust the prompt to fit your desired testing criteria.
"
      },
      "typeVersion": 1
    },
    {
      "id": "dd5f7e7b-bc69-4b67-90e6-2077b6b93148",
      "name": "Run Model with Dunamic Inputs",
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "position": [
        1020,
        400
      ],
      "parameters": {
        "model": "={{ $node['Extract Model IDsto Run Separately'].json.id }}",
        "options": {
          "topP": 1,
          "baseURL": "http://192.168.1.179:1234/v1",
          "timeout": 250000,
          "temperature": 1,
          "presencePenalty": 0
        }
      },
      "credentials": {
        "openAiApi": {
          "id": "LBE5CXY4yeWrZCsy",
          "name": "OpenAi account"
        }
      },
      "typeVersion": 1
    },
    {
      "id": "a0ee6c9a-cf76-4633-9c43-a7dc10a1f73e",
      "name": "Analyze LLM Response Metrics",
      "type": "n8n-nodes-base.code",
      "position": [
        2000,
        160
      ],
      "parameters": {
        "jsCode": "// Get the input data from n8n
const inputData = items.map(item => item.json);

// Function to count words in a string
function countWords(text) {
 return text.trim().split(/\s+/).length;
}

// Function to count sentences in a string
function countSentences(text) {
 const sentences = text.match(/[^.!?]+[.!?]+/g) || [];
 return sentences.length;
}

// Function to calculate average sentence length
function averageSentenceLength(text) {
 const sentences = text.match(/[^.!?]+[.!?]+/g) || [];
 const sentenceLengths = sentences.map(sentence => sentence.trim().split(/\s+/).length);
 const totalWords = sentenceLengths.reduce((acc, val) => acc + val, 0);
 return sentenceLengths.length ? (totalWords / sentenceLengths.length) : 0;
}

// Function to calculate average word length
function averageWordLength(text) {
 const words = text.trim().split(/\s+/);
 const totalCharacters = words.reduce((acc, word) => acc + word.length, 0);
 return words.length ? (totalCharacters / words.length) : 0;
}

// Function to calculate Flesch-Kincaid Readability Score
function fleschKincaidReadability(text) {
 // Split text into sentences (approximate)
 const sentences = text.match(/[^.!?]+[.!?]*[\n]*/g) || [];
 // Split text into words
 const words = text.trim().split(/\s+/);
 // Estimate syllable count by matching vowel groups
 const syllableCount = (text.toLowerCase().match(/[aeiouy]{1,2}/g) || []).length;

 const sentenceCount = sentences.length;
 const wordCount = words.length;

 // Avoid division by zero
 if (wordCount === 0 || sentenceCount === 0) return 0;

 const averageWordsPerSentence = wordCount / sentenceCount;
 const averageSyllablesPerWord = syllableCount / wordCount;

 // Flesch-Kincaid formula
 return 206.835 - (1.015 * averageWordsPerSentence) - (84.6 * averageSyllablesPerWord);
}


// Prepare the result array for n8n output
const resultArray = [];

// Loop through the input data and analyze each LLM response
inputData.forEach(item => {
 const llmResponse = item.llm_response;

 // Perform the analyses
 const wordCount = countWords(llmResponse);
 const sentenceCount = countSentences(llmResponse);
 const avgSentenceLength = averageSentenceLength(llmResponse);
 const readabilityScore = fleschKincaidReadability(llmResponse);
 const avgWordLength = averageWordLength(llmResponse);

 // Structure the output to include original input and new calculated values
 resultArray.push({
 json: {
 llm_response: item.llm_response,
 prompt: item.prompt,
 model: item.model,
 start_time: item.start_time,
 end_time: item.end_time,
 time_diff: item.time_diff,
 word_count: wordCount,
 sentence_count: sentenceCount,
 average_sent_length: avgSentenceLength,
 readability_score: readabilityScore,
 average_word_length: avgWordLength
 }
 });
});

// Return the result array to n8n
return resultArray;
"
      },
      "typeVersion": 2
    },
    {
      "id": "adef5d92-cb7e-417e-acbb-1a5d6c26426a",
      "name": "Save Results to Google Sheets",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        2180,
        160
      ],
      "parameters": {
        "columns": {
          "value": {
            "Model": "={{ $('Extract Model IDsto Run Separately').item.json.id }}",
            "Prompt": "={{ $json.prompt }}",
            "Response ": "={{ $('LLM Response Analysis').item.json.text }}",
            "TIme Sent": "={{ $json.start_time }}",
            "Word_count": "={{ $json.word_count }}",
            "Sentence_count": "={{ $json.sentence_count }}",
            "Time Recieved ": "={{ $json.end_time }}",
            "Total TIme spent ": "={{ $json.time_diff }}",
            "readability_score": "={{ $json.readability_score }}",
            "Average_sent_length": "={{ $json.average_sent_length }}",
            "average_word_length": "={{ $json.average_word_length }}"
          },
          "schema": [
            {
              "id": "Prompt",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Prompt",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "TIme Sent",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "TIme Sent",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Time Recieved ",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Time Recieved ",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Total TIme spent ",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Total TIme spent ",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Model",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Model",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Response ",
              "type": "string",
              "display": true,
              "required": false,
              "displayName": "Response ",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "readability_score",
              "type": "string",
              "display": true,
              "removed": false,
              "required": false,
              "displayName": "readability_score",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "average_word_length",
              "type": "string",
              "display": true,
              "removed": false,
              "required": false,
              "displayName": "average_word_length",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Word_count",
              "type": "string",
              "display": true,
              "removed": false,
              "required": false,
              "displayName": "Word_count",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Sentence_count",
              "type": "string",
              "display": true,
              "removed": false,
              "required": false,
              "displayName": "Sentence_count",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            },
            {
              "id": "Average_sent_length",
              "type": "string",
              "display": true,
              "removed": false,
              "required": false,
              "displayName": "Average_sent_length",
              "defaultMatch": false,
              "canBeUsedToMatch": true
            }
          ],
          "mappingMode": "defineBelow",
          "matchingColumns": []
        },
        "options": {},
        "operation": "append",
        "sheetName": {
          "__rl": true,
          "mode": "list",
          "value": "gid=0",
          "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1GdoTjKOrhWOqSZb-AoLNlXgRGBdXKSbXpy-EsZaPGvg/edit#gid=0",
          "cachedResultName": "Sheet1"
        },
        "documentId": {
          "__rl": true,
          "mode": "list",
          "value": "1GdoTjKOrhWOqSZb-AoLNlXgRGBdXKSbXpy-EsZaPGvg",
          "cachedResultUrl": "https://docs.google.com/spreadsheets/d/1GdoTjKOrhWOqSZb-AoLNlXgRGBdXKSbXpy-EsZaPGvg/edit?usp=drivesdk",
          "cachedResultName": "Teacking LLM Success"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "DMnEU30APvssJZwc",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "2e147670-67af-4dde-8ba8-90b685238599",
      "name": "Capture End Time",
      "type": "n8n-nodes-base.dateTime",
      "position": [
        1380,
        160
      ],
      "parameters": {
        "options": {},
        "outputFieldName": "endDateTime"
      },
      "typeVersion": 2
    },
    {
      "id": "5a8d3334-b7f8-4f14-8026-055db795bb1f",
      "name": "Capture Start Time",
      "type": "n8n-nodes-base.dateTime",
      "position": [
        520,
        160
      ],
      "parameters": {
        "options": {},
        "outputFieldName": "startDateTime"
      },
      "typeVersion": 2
    },
    {
      "id": "c42d1748-a10d-4792-8526-5ea1c542eeec",
      "name": "Prepare Data for Analysis",
      "type": "n8n-nodes-base.set",
      "position": [
        1800,
        160
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "920ffdcc-2ae1-4ccb-bc54-049d9d84bd42",
              "name": "llm_response",
              "type": "string",
              "value": "={{ $('LLM Response Analysis').item.json.text }}"
            },
            {
              "id": "c3e70e1b-055c-4a91-aeb0-3d00d41af86d",
              "name": "prompt",
              "type": "string",
              "value": "={{ $('When chat message received').item.json.chatInput }}"
            },
            {
              "id": "cfa45a85-7e60-4a09-b1ed-f9ad51161254",
              "name": "model",
              "type": "string",
              "value": "={{ $('Extract Model IDsto Run Separately').item.json.id }}"
            },
            {
              "id": "a49758c8-4828-41d9-b1d8-4e64dc06920b",
              "name": "start_time",
              "type": "string",
              "value": "={{ $('Capture Start Time').item.json.startDateTime }}"
            },
            {
              "id": "6206be8f-f088-4c4d-8a84-96295937afe2",
              "name": "end_time",
              "type": "string",
              "value": "={{ $('Capture End Time').item.json.endDateTime }}"
            },
            {
              "id": "421b52f9-6184-4bfa-b36a-571e1ea40ce4",
              "name": "time_diff",
              "type": "string",
              "value": "={{ $json.timeDifference.days }}"
            }
          ]
        }
      },
      "typeVersion": 3.4
    },
    {
      "id": "04679ba8-f13c-4453-99ac-970095bffc20",
      "name": "Extract Model IDsto Run Separately",
      "type": "n8n-nodes-base.splitOut",
      "position": [
        300,
        160
      ],
      "parameters": {
        "options": {},
        "fieldToSplitOut": "data"
      },
      "typeVersion": 1
    },
    {
      "id": "97cdd050-5538-47e1-a67a-ea6e90e89b19",
      "name": "Sticky Note7",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        2240,
        -160
      ],
      "parameters": {
        "width": 330.4677419354838,
        "height": 182.9032258064516,
        "content": "### Optional
You can just delete the google sheet node, and review the results by hand. 

Utilizing the google sheet, allows you to provide mulitple prompts and review the analysis against all of those runs."
      },
      "typeVersion": 1
    },
    {
      "id": "5a1558ec-54e8-4860-b3db-edcb47c52413",
      "name": "Add System Prompt",
      "type": "n8n-nodes-base.set",
      "position": [
        740,
        160
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "fd48436f-8242-4c01-a7c3-246d28a8639f",
              "name": "system_prompt",
              "type": "string",
              "value": "Ensure that messages are concise and to the point readable by a 5th grader."
            }
          ]
        },
        "includeOtherFields": true
      },
      "typeVersion": 3.4
    },
    {
      "id": "74df223b-17ab-4189-a171-78224522e1c7",
      "name": "LLM Response Analysis",
      "type": "@n8n/n8n-nodes-langchain.chainLlm",
      "position": [
        1000,
        160
      ],
      "parameters": {
        "text": "={{ $('When chat message received').item.json.chatInput }}",
        "messages": {
          "messageValues": [
            {
              "message": "={{ $json.system_prompt }}"
            }
          ]
        },
        "promptType": "define"
      },
      "typeVersion": 1.4
    },
    {
      "id": "65d8b0d3-7285-4c64-8ca5-4346e68ec075",
      "name": "Sticky Note8",
      "type": "n8n-nodes-base.stickyNote",
      "position": [
        380,
        780
      ],
      "parameters": {
        "color": 3,
        "width": 570.0000000000002,
        "height": 182.91935483870984,
        "content": "## 🚀Pro Tip 

If you are getting strange results, ensure that you are Deleting the previous chat (next to the Chat Button) to ensure you aren't bleeding responses into the next chat. "
      },
      "typeVersion": 1
    }
  ],
  "active": false,
  "pinData": {},
  "settings": {
    "timezone": "America/Denver",
    "callerPolicy": "workflowsFromSameOwner",
    "executionOrder": "v1",
    "saveManualExecutions": true
  },
  "versionId": "a80bee71-8e21-40ff-8803-42d38f316bfb",
  "connections": {
    "Get Models": {
      "main": [
        [
          {
            "node": "Extract Model IDsto Run Separately",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Capture End Time": {
      "main": [
        [
          {
            "node": "Get timeDifference",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Add System Prompt": {
      "main": [
        [
          {
            "node": "LLM Response Analysis",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Capture Start Time": {
      "main": [
        [
          {
            "node": "Add System Prompt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Get timeDifference": {
      "main": [
        [
          {
            "node": "Prepare Data for Analysis",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "LLM Response Analysis": {
      "main": [
        [
          {
            "node": "Capture End Time",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Data for Analysis": {
      "main": [
        [
          {
            "node": "Analyze LLM Response Metrics",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "When chat message received": {
      "main": [
        [
          {
            "node": "Get Models",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Analyze LLM Response Metrics": {
      "main": [
        [
          {
            "node": "Save Results to Google Sheets",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Run Model with Dunamic Inputs": {
      "ai_languageModel": [
        [
          {
            "node": "LLM Response Analysis",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Extract Model IDsto Run Separately": {
      "main": [
        [
          {
            "node": "Capture Start Time",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}

Features

  • Chat-triggered testing of every model currently loaded in LM Studio
  • Automatic retrieval of available model IDs from the LM Studio server
  • Per-model response time measurement
  • Text analysis of each response (readability score, word count, sentence count, average word and sentence length)
  • Optional logging of all results to Google Sheets for side-by-side comparison

Technical Analysis

Node Types and Roles

  • n8n-nodes-base.stickyNote – documentation notes embedded in the canvas
  • n8n-nodes-base.httpRequest – queries the LM Studio server for the list of loaded models
  • @n8n/n8n-nodes-langchain.chatTrigger – starts the workflow from a chat message
  • n8n-nodes-base.splitOut – fans the model list out into one item per model (see the sketch below)
  • n8n-nodes-base.set – adds the system prompt and shapes the data for analysis
  • @n8n/n8n-nodes-langchain.lmChatOpenAi – sends the prompt to each model via the OpenAI-compatible API
  • @n8n/n8n-nodes-langchain.chainLlm – runs the prompt/response chain
  • n8n-nodes-base.dateTime – captures start/end times and computes the time difference
  • n8n-nodes-base.code – computes readability and other text metrics
  • n8n-nodes-base.googleSheets – appends the results to the tracking sheet
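For orientation, the "Extract Model IDsto Run Separately" node is a Split Out node that fans the `data` array returned by `/v1/models` into one item per model. A rough Code-node equivalent, written in the same style as the workflow's existing Code node, might look like this (an illustrative sketch, not part of the workflow):

```javascript
// Sketch: fan out the model list returned by the "Get Models" HTTP request.
// Equivalent in spirit to the Split Out node operating on the `data` field.
const modelList = items[0].json.data || [];

// Emit one item per model; downstream expressions read the `id` field
// (e.g. the LLM node's model parameter).
return modelList.map(model => ({ json: model }));
```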

Complexity Assessment

Configuration difficulty: ★★★★☆
Maintenance difficulty: ★★☆☆☆
Extensibility: ★★★★☆

Implementation Guide

Prerequisites

  • A running n8n instance with access to this workflow
  • LM Studio installed, with one or more models loaded and the local server running
  • Network access from n8n to the LM Studio server (e.g. http://192.168.1.179:1234)
  • Google Sheets OAuth credentials (optional, for logging results)

Configuration Steps

  1. Import the workflow JSON into n8n.
  2. Update the base URL in the "Get Models" and "Run Model with Dunamic Inputs" nodes to your LM Studio server's IP (a quick connectivity check is sketched below).
  3. Adjust the model settings (temperature, top P, presence penalty) in the LLM node as needed.
  4. Optionally create a Google Sheet with the headers listed in the workflow's sticky note and connect your Google Sheets credentials.
  5. Open the chat, send a test prompt, and verify the workflow runs against every loaded model.
  6. Review the results in the Google Sheet (or inspect node output directly if you removed the Sheets node).
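To verify step 2, you can call the LM Studio server's OpenAI-compatible `/v1/models` endpoint directly; this is the same request the "Get Models" node makes. A minimal Node.js sketch (the IP below is the example address from the workflow; replace it with your own):

```javascript
// Quick connectivity check against the LM Studio server (Node 18+ with built-in fetch).
const BASE_URL = "http://192.168.1.179:1234/v1"; // replace with your LM Studio server's IP

async function listModels() {
  // 10-second timeout, matching the "Get Models" node's setting.
  const res = await fetch(`${BASE_URL}/models`, { signal: AbortSignal.timeout(10000) });
  if (!res.ok) throw new Error(`LM Studio returned HTTP ${res.status}`);
  const body = await res.json();
  // The workflow splits this `data` array into one item per loaded model.
  for (const model of body.data) console.log(model.id);
}

listModels().catch(err => console.error("Connection failed:", err.message));
```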

Key Parameters

Parameter          Default   Description
temperature        1         Controls randomness of the model output
topP               1         Nucleus sampling threshold
presencePenalty    0         Penalizes tokens already present, encouraging more varied responses
timeout            250000    LLM request timeout in milliseconds (the model-list request uses 10000 ms)
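These defaults map directly onto LM Studio's OpenAI-compatible chat completions endpoint. The sketch below approximates the request the "Run Model with Dunamic Inputs" / "LLM Response Analysis" pair sends for each model; the model ID and prompt are placeholders, and the system prompt is the one set by the "Add System Prompt" node:

```javascript
// Sketch of one chat completion against LM Studio with the workflow's default settings.
const BASE_URL = "http://192.168.1.179:1234/v1"; // replace with your LM Studio server's IP

async function runModel(modelId, prompt) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    signal: AbortSignal.timeout(250000), // matches the LLM node's 250,000 ms timeout
    body: JSON.stringify({
      model: modelId,            // e.g. an ID returned by /v1/models
      temperature: 1,
      top_p: 1,
      presence_penalty: 0,
      messages: [
        // Injected by the "Add System Prompt" node in the workflow.
        { role: "system", content: "Ensure that messages are concise and to the point readable by a 5th grader." },
        { role: "user", content: prompt },
      ],
    }),
  });
  const body = await res.json();
  return body.choices[0].message.content; // the text that the metrics node analyzes
}
```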

Best Practices

Optimization Suggestions

  • Delete the previous chat session before each new test run so earlier responses don't bleed into the next comparison
  • Use the system prompt to standardize response style, which keeps readability scores comparable across models
  • Rerun the "Get Models" node whenever you load or swap models in LM Studio
  • Increase the LLM node timeout for larger, slower local models

Security Considerations

  • Keep your Google Sheets credentials and any API keys secure
  • Restrict workflow access (this template uses the "workflows from same owner" caller policy)
  • Keep the LM Studio server on a trusted local network; avoid exposing port 1234 to the internet
  • Review execution logs for unexpected prompts or results

Performance Optimization

  • Load only the models you are actively testing; each loaded model consumes RAM/VRAM on the LM Studio host
  • Raise the request timeout rather than retrying when large models respond slowly
  • Monitor CPU, GPU, and memory usage on the LM Studio machine during test runs
  • Log multiple prompts to the Google Sheet so you can compare models across runs over time

Troubleshooting

Common Issues

"Get Models" fails or returns nothing

Confirm the LM Studio server is running and that the base URL matches its IP and port; the request times out after 10 seconds.

Model requests time out

Increase the timeout in the "Run Model with Dunamic Inputs" node (default 250,000 ms), or load a smaller, faster model.

Google Sheets append fails

Verify the OAuth credentials are valid and that the sheet's column headers match the names mapped in the Google Sheets node.

Debugging Tips

  • Execute nodes one at a time in the n8n editor to locate the failing step
  • Inspect the "Get Models" output to confirm which model IDs are currently active
  • Verify network connectivity between n8n and the LM Studio server
  • Start with a short test prompt before running longer evaluations

Error Handling

The workflow includes the following error-handling mechanisms:

  • The "Get Models" HTTP request is set to continue on its error output instead of stopping the workflow
  • Request timeouts: 10 seconds for the model-list request, 250 seconds for each model response
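The workflow does not retry failed requests on its own. If you want retries, one option is to wrap the call in a Code node with simple backoff logic; the sketch below is an optional enhancement (the retry count and delay are arbitrary choices, not part of the template):

```javascript
// Optional enhancement sketch: retry a request to the LM Studio server a few times
// before giving up. Not part of the original workflow.
async function fetchWithRetry(url, options = {}, retries = 3, delayMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetch(url, options);
      if (res.ok) return res;
      throw new Error(`HTTP ${res.status}`);
    } catch (err) {
      if (attempt === retries) throw err;             // out of attempts: surface the error
      await new Promise(r => setTimeout(r, delayMs)); // wait before retrying
    }
  }
}

// Example usage (IP is an example value):
// const models = await fetchWithRetry("http://192.168.1.179:1234/v1/models").then(r => r.json());
```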