Home Data Center Series: Unlock the full potential of Lobechat, a complete guide from setup to actual use

Preface

In the previous article, I introduced how to install the local large language model UI Lobechat (see: Docker series based on the open source large language model UI framework: Lobechat detailed deployment tutorial) and the third-party API provider OhMyGPT, and briefly mentioned that combining Lobechat with OhMyGPT creates a seamless switching experience across multiple API providers.

However, Lobechat offers many features and correspondingly many settings, so it is not trivial to use. There are also differences between the services offered directly by API providers and those offered by third-party API providers (such as OhMyGPT), which can confuse newcomers. On top of that, Lobechat itself comes with some tips and caveats, so I thought it worth writing a separate article to sort out its settings and usage; consider it a detailed Lobechat tutorial.

Prerequisite knowledge: API providers and third-party API providers

API Providers

Note: I have already covered this part in detail in another article (see: Starting the AI journey: A detailed introduction to local large language model UI and large language model API providers), so here I will only summarize it briefly for the sake of structural completeness.

Large language model API providers are companies that develop and provide powerful natural language processing models, providing services to developers and enterprises in the form of APIs. These models are usually based on advanced deep learning technology and can understand, generate and process human language. The APIs provided by the providers allow users to integrate these language models into various applications for tasks such as text generation, translation, conversation, data analysis, etc. Common API providers include but are not limited to:

  1. OpenAI (ChatGPT): OpenAI is one of the leading large language model providers, offering the ChatGPT API, which supports text generation and conversation based on the GPT models. ChatGPT provides efficient natural language processing in a variety of scenarios, with applications ranging from customer support to content creation.
  2. Google (Gemini): Google provides the Gemini series of models through Google Cloud's API service. These models are known for their deep language understanding and generation capabilities, especially in multilingual processing, text analysis, and information extraction tasks.
  3. Anthropic (Claude): Anthropic's Claude series is an emerging powerhouse in the field of large language models. Its API provides intelligent and safer conversation generation. Claude is good at handling complex conversations and maintaining semantic consistency, and is widely used in AI assistants, customer support, and other scenarios.

Third-party API providers

What are third-party API providers?

Third-party API providers refer to companies or platforms that do not directly develop their own large language models or AI technologies, but instead integrate multiple mainstream API providers (such as OpenAI, Google, Anthropic, etc.) and provide unified interfaces and services to help users easily access these models. These third-party providers usually simplify the management and integration process of APIs, allowing developers and enterprises to choose different AI models from a single platform without having to separately register, configure, and manage APIs from multiple providers.

Their services can include cost reduction, unified billing systems, simplified API management, and seamless integration of multiple models, enabling businesses and developers to quickly adopt AI technology in a more flexible and economical way.

How third-party API providers work

The operating principle of third-party API suppliers can be simply understood as: a platform that integrates multiple large language model API suppliers. Users only need to use the unified API address and access key provided by the platform to call multiple different models.

The core functions of a third-party API provider (taking OhMyGPT, the one I use, as an example) are as follows:

1. Channel management: On the third-party API supplier platform, each channel corresponds to an API Key, which can be an API Key from OpenAI, Microsoft, Google, etc. One API Key can access multiple models from the same supplier. OhMyGPT will automatically select the appropriate channel to call a specific model based on the user's request.

2. Access credentials: Users only need the third-party supplier's unified access credentials to reach every model integrated on the platform; there is no need to configure separate API credentials for each large language model. Configure the third-party supplier's API address and access token once, and you can call the models of multiple large language model API suppliers.

3. Operation process:

• The user makes a request, specifying the name of the desired model.
• OhMyGPT matches the corresponding API channel according to the model name in the request and selects the appropriate model to call.
• After the match is successful, OhMyGPT sends a request to the actual large language model API provider to obtain the processing result.
• Finally, OhMyGPT returns the results to the user.
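The channel-matching step in the flow above can be sketched as a simple prefix lookup. This is a minimal illustration of the idea, not OhMyGPT's actual routing table; the model prefixes and provider names are assumptions for the example.

```python
# Illustrative model-prefix -> provider-channel mapping (not OhMyGPT's real table).
CHANNELS = {
    "gpt-": "openai",
    "gemini-": "google",
    "claude-": "anthropic",
}

def match_channel(model_name: str) -> str:
    """Pick the upstream provider channel for a requested model name."""
    for prefix, provider in CHANNELS.items():
        if model_name.startswith(prefix):
            return provider
    raise ValueError(f"no channel configured for model {model_name!r}")
```

In practice the platform then forwards the request to the matched provider using that channel's stored API key, which is why the user only ever needs one set of credentials.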

This architecture makes OhMyGPT a convenient third-party platform where users can access multiple model suppliers through a unified interface without having to manage the details of each API separately, greatly improving the efficiency of use and integration. This approach greatly simplifies the process of switching and calling between different models for users, avoiding the need to deal with different interface formats and authentication methods for each supplier.

Note: From a certain perspective, third-party API providers can be regarded as a kind of 'API reverse proxy', which forwards user requests to different large language model API providers and returns the results by providing a unified interface and access key, thus simplifying the multi-model switching and calling process.


There is an open source project on GitHub called One API, with 18.4k stars. It is a self-hosted API relay gateway (project address: https://github.com/songquanpeng/one-api) that provides a unified interface to various large language model API suppliers. Its working principle is essentially the same as that of third-party API suppliers, so you can use One API to understand how they work, as shown in the following figure:

image.png

One API provides the following core functions (you can compare them with the core functions of the third-party API providers mentioned above):

1. Unified interface: By defining a standardized interface, developers can call different AI model APIs through a single set of endpoints, reducing the complexity of integrating multiple services.

2. Key management: Provides flexible API key management for different services. Through One API, users can more easily manage and configure multiple API keys, keeping calls secure and convenient.

3. Logging and monitoring: Provides logging and statistical analysis for each API request, helping developers monitor the usage and performance of each model.

4. Openness and scalability: As an open source project, One API can be modified and extended to fit specific needs and scenarios.
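From the client's point of view, calling a gateway like this is just an ordinary OpenAI-style request with a different base URL and token. The sketch below only assembles such a request (it does not send it); the gateway address and token are hypothetical placeholders.

```python
import json

# Hypothetical self-hosted gateway address and the token it issued (placeholders).
BASE_URL = "https://relay.example.com/v1"
API_KEY = "sk-my-relay-token"

def build_chat_request(model: str, user_message: str):
    """Assemble an OpenAI-style chat completion request aimed at the gateway.

    Only the base URL and key differ from a direct OpenAI call; the payload
    format is unchanged, which is why the relay can forward it transparently.
    """
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, headers, json.dumps(body)
```

You would pass the returned pieces to any HTTP client; swapping between a direct provider and a relay is then purely a configuration change.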

However, the problem it solves differs from that of third-party API suppliers: its main purpose is to help users who have already purchased services from the major API suppliers work around IP-based access restrictions (for example, suppliers that are not open to mainland China). Users deploy One API as a relay gateway on a foreign cloud host with an unrestricted IP, and reach the major language model suppliers through that host's API, bypassing the geographic restrictions. In contrast, third-party API suppliers directly provide a unified API endpoint and access key that aggregates the services of multiple large model suppliers, so users do not have to purchase or manage each supplier's services on their own.

In other words, third-party API platforms mainly simplify switching and calling between multiple models, while One API focuses on solving geographical restrictions and allows users to continue using the services they have purchased.

Because their focuses differ, One API, as an API relay gateway, must pay more attention to compatibility with the various vendors' APIs, so it usually performs better across scenarios. Third-party vendors (such as OhMyGPT) may have limited support for specific vendors' APIs in some cases, especially beyond simple conversation requests. For example, in non-chat applications, WordPress AI plug-ins (such as AI Engine) need to generate or analyze complex content, and such calls place high demands on API compatibility. In these scenarios, if OhMyGPT does not accurately adapt to the vendor's API format or response-handling logic, the success rate may drop.

Therefore, OhMyGPT is more suitable for chat application scenarios (such as Lobechat or ChatGPT Next Web). If you cannot successfully use third-party API providers such as OhMyGPT in other scenarios, and you happen to have a cloud host with a suitable IP, you can consider the One API solution.


Introduction to Third-Party API Providers

Common third-party API providers include but are not limited to:

1. OhMyGPT

Introduction: OhMyGPT is a third-party provider that focuses on API docking services for multiple large language models. Through OhMyGPT, users can use top large language models such as OpenAI, Google Gemini, and Anthropic Claude without managing accounts for each provider separately. A major advantage of OhMyGPT is its more convenient interface and lower usage costs, making it suitable for developers and enterprises who want to use multiple models flexibly.

Features: Simplifies API management, supports multiple large model platforms, and has relatively low costs.

2. RapidAPI

Introduction: RapidAPI is the world's largest API market, providing thousands of APIs for users to access, including large language model APIs. Developers can quickly find and connect to AI vendors such as OpenAI and IBM Watson through the RapidAPI platform, and use a unified API key management and billing system. It provides developers with the ability to test and monitor APIs, improving the efficiency of accessing different services.

Features: Rich API variety, fast integration, unified management and billing.

3. NLP Cloud

Introduction: NLP Cloud provides APIs based on a variety of natural language processing models, supporting the GPT series, T5, BERT, and others. NLP Cloud focuses on providing enterprises with optimized language model solutions, especially for text generation, classification, and translation. It also offers privacy protection and custom model options, well suited to enterprises requiring high security and customized functions.

Features: Supports multiple NLP models, enterprise-level solutions, and privacy protection.

4. AssemblyAI

Introduction: AssemblyAI is known for its speech recognition API, but it also provides services related to large language models through its platform. Users can integrate OpenAI's GPT model through AssemblyAI, combining language processing with speech recognition, and apply it to multimodal scenarios such as speech-to-text and smart assistants. It provides developers with a fast and stable access experience.

Features: Multimodal support, combination of voice and text processing, and stable API service.

5. Spell

Introduction: Spell provides training and management services for machine learning and AI models, letting developers access language models from OpenAI, Google, and others through its platform. Spell also offers custom training and deployment options, suitable for companies or researchers who need flexible deployment and management of AI models.

Features: Customize model training, simplify deployment, and flexibly integrate multiple AI models.

6. LangchainHub

Introduction: LangchainHub is a third-party platform focused on natural language processing and large language model integration. Through LangchainHub, users can integrate multiple large language model APIs (OpenAI, Anthropic, Google, etc.), simplifying multi-model calls and providing compatibility with different tools and frameworks, helping developers and researchers quickly build NLP applications.

Features: Extensive integration of multiple models, improved development efficiency, and multi-tool compatibility.

In this article, OhMyGPT is used as an example of a third-party API provider.

Optional: Use Lobechat as a PWA (Progressive Web App)

Introduction to PWA

Lobechat supports PWA (Progressive Web Apps), which enables it to provide a native-like experience on mobile and desktop devices. With PWA support, users can add a shortcut to Lobechat on the desktop of their work device and open and run it like a local application. PWA features include:

  1. Offline support: The PWA allows users to still access some functions without a network connection.
  2. Push notifications: Users can receive real-time notifications from Lobechat (such as messages or task reminders) through the PWA, improving interactivity.
  3. Quick installation: Users can install it with one click from the browser, a more convenient access method that requires no app store download.
  4. Automatic updates: PWA applications update automatically, ensuring users always run the latest version of Lobechat.

With PWA, Lobechat can provide a consistent and convenient user experience on different devices, which is very suitable for frequent use scenarios.

PWA installation

Generally speaking, you will be prompted to install when you log in to Lobechat for the first time, or you can install it by clicking the icon on the right side of the browser address bar, as shown below:

image.png

After the installation is complete, it looks the same as a regular application. For example, after I installed it on a Mac, it showed the following:
image.png

After that, you can open Lobechat directly by clicking the icon. It is essentially a shortcut wrapping the URL: when opened it mostly runs on the default browser engine, but it looks like a standalone application. To enhance the user experience, the PWA opens in a standalone window with no address bar and no tabs, giving the feel of an app independent of the browser. It looks like this:

image.png

On a technical level, it still relies on the browser engine to run, so the installed PWA shortcut will follow the updates and configurations of the default browser.

Full PWA support depends on the capabilities of the default browser: the richer the browser's PWA feature set, the better the PWA behaves, especially regarding support for Service Workers, the Web App Manifest, and HTTPS. PWA support therefore varies depending on which browser is the default:

  1. Better browser support: Modern browsers such as Chrome, Edge, Firefox, and Safari all support basic PWA features, such as offline access, home screen shortcuts, push notifications, etc. On these browsers, the PWA experience is close to that of native apps.
  2. Mobile: On Android devices, Chrome and Edge support PWA installation, allowing users to add to the home screen directly from the browser. Safari on iOS also supports PWA, but its functionality is slightly limited (such as no support for push notifications and background updates).
  3. Desktop:Chrome, Edge and Firefox support the installation of PWA on the desktop, allowing users to manage them like local applications.

Therefore, in order to get the best PWA experience, it is recommended to use the latest version of Chrome, Edge and other browsers as the default browser.

Lobechat App Settings

Application Settings Introduction

Lobechat's "App Settings" interface is the hub for configuring the application. It includes tabs such as "General Settings", "System Assistant", "Language Model", "Voice Services", and "Default Assistant", helping users tune the AI assistant's behavior and the interface.

General settings: Users can configure the application's theme, display language, dialogue style and other basic settings here to customize the overall interface style and user experience.

System Assistant: This tab is used to manage and configure system-level assistant functions. Users can set the role and specific tasks of the system assistant to ensure that it can operate efficiently in different application scenarios.

Language Model: Here users can select and configure the language model to be used, including model type, model parameters, etc., to better adapt to different dialogue requirements and task complexity.

Voice Services: Provides configuration options for speech recognition and speech synthesis functions. Users can enable voice input or voice output to improve the way they interact with AI assistants.

Default Assistant: Users can set their default conversation assistant here, including adjusting its personality, conversation style, and parameters (such as randomness) to suit daily communication or specific tasks.

The "App Settings" interface lets users flexibly customize Lobechat's features to fit different workflows and preferences.

To enter "App Settings", first log in, then click the avatar in the upper-left corner (red box below):

image.png

and select "App Settings" to enter:
image.png

Language Model

In Lobechat's settings, "Language Model" is the most important item. It lets you select and configure a specific large language model API provider for chat and other AI-driven tasks: it defines the model type used (e.g. GPT-4, Claude 3.5 Sonnet, Gemini) and sets parameters such as the API provider, API key, API proxy address, and model list, tailoring responses to the chosen model's capabilities:

image.png

Generally speaking, there are two ways to use it.

Directly use the official service of API providers

This method suits those living abroad, or those in China who have a VPN ("science or magic", as we say) and can change their IP freely, and who have the budget (OpenAI's official Plus plan costs 20 US dollars).

Directly use the official OpenAI service:

image.png

Directly use the official Azure OpenAI service:
image.png

Directly use the official Anthropic service:
image.png

Note 1: There is nothing wrong with using official services directly. As long as you have a legitimate API key purchased from the official website and the public IP of the device running Lobechat is not a mainland China IP, it will work fine.

Note 2: I raise the IP issue because most foreign language model API suppliers have blocked direct access from mainland China IP addresses. The only one that had not was Azure OpenAI, and it is available only to enterprise users. The following is the email notification I received on October 17:

image.png

In my previous article, I recommended this non-VPN way of directly using official OpenAI from within China. I did not expect that man proposes, God disposes (it seems it was always meant to be enterprise-only, and now that is enforced when you apply for access). I have completely given up on that idea; in the future, if I only have a domestic IP, I will simply use the products of domestic large language model vendors.

Note 3: The "Use client request mode" option gives different experiences depending on the person and network environment; try it yourself.

Use API providers' services through a third-party API provider

This method is better suited to those in China without a VPN, or those who have one but find using official services directly too expensive (like me?). The prerequisite is that you already have a third-party API supplier that fits your needs. The one I currently use is OhMyGPT, mentioned many times above, so it serves as the example in the following text.

Note: For a detailed introduction to OhMyGPT, please refer to my other article; I will not repeat it here.

So, how should OhMyGPT be used in Lobechat? The following uses OpenAI's settings interface to demonstrate:

image.png

There are 3 key settings here: API KEY, API proxy address, and model list.

1. For "API Key" in the screenshot above, fill in the API key from your OhMyGPT account:

image.png

2. For "API proxy address", fill in the OhMyGPT line (endpoint) address that suits you:
image.png

3. Model list: simply select the model versions you need in the "Model List" on Lobechat's OpenAI settings page:
image.png


If the model version you need is not among the ones Lobechat provides by default but does appear under "Settings" in your OhMyGPT account (a bit awkward: I searched for a while and could not find such a case, so I made one up; suppose it is gpt-4-32k-1230), you can create it directly in OpenAI's "Model List". In the image below you can see that Lobechat has no built-in support for that version (of course not, I invented it~), so it offers the option "Create and add gpt-4-32k-1230 model":

image.png

The model version will then appear in the model list:
image.png

Note: In Lobechat's language model options, although you appear to be using the model list under the OpenAI settings, once the API proxy address points to OhMyGPT, all requests are processed through OhMyGPT. As long as the model version selected in the model list is correct and OhMyGPT supports it (it need not be an OpenAI model, e.g. gemini-1.5-pro-002 in the figure above), OhMyGPT can identify which API vendor the request should go to (here, gemini-1.5-pro-002 is routed to Google) and, based on the model version, automatically generate a request format that vendor understands. So even though OpenAI appears to be selected in Lobechat's language model interface, as long as the API proxy address points to OhMyGPT, models from other API vendors can be called this way.
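This routing works because the OpenAI-format request body is identical for every model; only the "model" field changes, and that field alone tells the proxy where to send the request. A minimal sketch (model names are examples only):

```python
def openai_format_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-format chat payload; works for any model name."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

gpt_req = openai_format_payload("gpt-4o", "hi")
gemini_req = openai_format_payload("gemini-1.5-pro-002", "hi")
# Same structure, different model string: the proxy routes on that string alone.
```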


Additional knowledge: Why choose to use OpenAI's settings interface to call models from other API providers?

The reason for choosing OpenAI's settings interface to do this is closely related to compatibility and popularity. The specific reasons can be summarized as follows:

1. Widespread use and standardization of OpenAI:

OpenAI's API interfaces and models are widely used in AI applications, so many developers and users are very familiar with its interface and configuration options. Using OpenAI's settings interface can maximize compatibility with existing user habits while simplifying the development and integration process.

2. Higher compatibility:

OpenAI's API design structure and calling method have become a relatively common standard. Many AI proxy services (such as OhMyGPT) or intermediary platforms can easily parse and forward OpenAI-style API requests to other suppliers, such as Ollama, Google, Anthropic, etc. This structure is simple and powerful, so when doing proxy or forwarding, OpenAI's interface and settings can better adapt to the needs of different AI suppliers.

3. Reduce development complexity:

Since OpenAI's API structure is widely accepted, application developers can adapt to multiple AI vendors through a unified, OpenAI-style API design, without having to build a separate integration for each vendor. This reduces development complexity and improves platform flexibility, making it easier to add more models and vendors in the future.

4. Versatility and scalability:

OpenAI's settings interface is usually highly versatile and can convert requests into the API format of other vendors through a proxy mechanism. This allows applications to access more different models and vendors through the background proxy, even if they use OpenAI's settings box on the surface, thus improving scalability.

Therefore, OpenAI's interface was chosen to call models from other API vendors not only because of its popularity, but also because of its versatility and flexibility, making it easier to be compatible with multiple AI models and vendors.

System Assistant

Under the System Assistant option in the Lobechat app settings, the three key items are "Topic Naming Model", "Translation Model" and "Assistant Metadata Generation Model":

image.png

1. Topic Naming Model:

This model is used to automatically generate topic names for conversations, especially when the conversations are long or involve multiple topics. It can intelligently name the conversations based on the content of the conversation, helping users better organize and review different topics.

2. Translation Model:

The translation model is used to convert the conversation content between different languages. Users can choose different language models to translate the conversation in real time according to their needs, making cross-language communication smoother.

3. Assistant Metadata Generation Model:

This model is responsible for generating metadata related to the conversation, which is usually used for conversation management or further analysis. The metadata may include information such as conversation context, sentiment analysis, keyword extraction, etc., which helps to optimize the conversation experience and data processing.

All the models created in the language model above can be selected here as drop-down menu options:

image.png

General settings

image.png

Under "General Settings" in the Lobechat app, besides the interface "Language", you can adjust Lobechat's overall appearance, including theme, font size, primary color, and neutral color; you can also reset all settings and clear all session messages. This part is simple enough to understand from the interface itself, so I will not elaborate.

Voice Services

In the Lobechat app settings, the "Voice Services" option provides voice-related configuration, mainly including the following settings:

image.png

  1. Speech recognition service: Configures the speech recognition service used to convert the user's voice input into text. The drop-down menu has two options: OpenAI and Browser.
  2. Automatically end speech recognition: Controls whether speech recognition stops automatically when you finish speaking, to optimize recognition with discontinuous input.
  3. OpenAI speech synthesis model: Sets OpenAI's speech synthesis model for converting text into speech output.
  4. OpenAI speech recognition model: Sets OpenAI's speech recognition model for speech-to-text conversion.

Default Assistant

What the default assistant option does

In the Lobechat app settings, the "Default Assistant" option automatically assigns a preset smart assistant to the user's session. When the user starts a new conversation, the system will enable this assistant by default to provide corresponding answers and interactions.


Lobechat's default assistant is usually the "casual chat" assistant that is available by default in the chat interface. When you start a new conversation, this assistant will automatically load and be ready for ordinary conversations or daily help. You can change the type of this default assistant in the app settings as needed to adapt to different chat needs or tasks:

image.png


The default assistant's functions include:

  1. Personalized interaction: Users can get a personalized conversation experience based on the default assistant they set. For example, you can choose an assistant that focuses on a certain field (such as programming or content creation), and the system will answer relevant questions based on the assistant's preset configuration.
  2. Simplify the use process: It eliminates the need to manually select an assistant every time you start a new conversation, improving the convenience of the user experience.
  3. Specific use scenarios: Suitable for users who focus on a certain task for a long time, such as using a specific assistant for translation, writing, or professional Q&A. By setting a default assistant, users can directly enter this mode.

You can use Lobechat's "Default Assistant" setting to specify the assistant you use most often or that is best suited for the current task, ensuring that the intelligent experience for each conversation meets your expectations and needs.

Illustration of the default assistant's related settings

image.png

image.png

image.png


In using AI assistants, the "Prompt" plays a crucial role and directly affects the quality of the assistant's understanding and responses. A prompt is a piece of text or a question entered by the user that tells the AI what task to complete or what information to provide. Its role and importance are as follows:

1. Clarify the task intent

Prompts define the user's needs, enabling AI to understand and respond to specific requests. For example, detailed prompts can help AI assistants more accurately understand the context of the question, tone, or expected output format, thereby generating more relevant responses. Without clear prompt words, AI assistants may give off-topic answers.

2. Control over generated content

By optimizing and designing prompts, users can control the content generated by the AI assistant during the conversation. A good prompt can specify the AI's answer style (such as humor, professionalism), content details (such as clear points, detailed explanations), and even tone. Especially in long conversations, prompts can help maintain content coherence.

3. Improve interaction efficiency

The design of prompts directly affects the efficiency of interaction. A clear prompt can reduce unnecessary clarifications and allow AI assistants to give more accurate and comprehensive answers, thereby reducing the number of repeated communications. Therefore, a properly designed prompt can help users complete tasks efficiently.

4. Enhance the applicability of AI applications

The flexibility of Prompt allows the AI assistant to adapt to a variety of application scenarios, such as writing assistance, code generation, data analysis, etc. Users can specify specific tasks or fields through Prompt and use the generation capabilities of the AI assistant to complete different types of work.

In short, a prompt acts as an "instruction" to the AI assistant, giving the conversation direction and detail. A well-designed prompt not only improves the assistant's response quality but also enables a more efficient and accurate interaction, so designing and using prompts properly is key to getting the most out of an AI assistant.
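In chat-style APIs, this "instruction" is typically delivered as a system message alongside the user's input. A minimal sketch of the message list a single turn sends to the model (the prompt texts here are just examples):

```python
def build_messages(system_prompt: str, user_input: str) -> list:
    """Assemble the message list sent to the model for one conversation turn."""
    return [
        {"role": "system", "content": system_prompt},  # role/style instructions
        {"role": "user", "content": user_input},       # the actual task
    ]

messages = build_messages(
    "You are a concise technical writer. Answer in numbered bullet points.",
    "Summarize the advantages of PWAs.",
)
```

Changing only the system prompt changes the answer's style and focus without touching the user's question, which is exactly the control described above.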


Lobechat "Session" interface

Lobechat's session interface is the core platform for interacting with AI assistants (the main day-to-day workspace). It supports instant conversations, topic management, and conversation history, helping users get real-time replies and keep conversations consistent. Users can handle multiple tasks at the same time and tune the conversation experience through custom settings (such as randomness and nucleus sampling). It is an efficient and convenient tool. The interface looks like this:

image.png

You can click on different assistants in the assistant bar on the left. After selecting, you can click on the assistant avatar on the right to enter the specific settings of the assistant:
image.png

The following options are similar to the default assistant configuration options mentioned above, such as assistant information:
image.png

Role settings:
image.png

Chat preferences:
image.png

Model setup:
image.png

Voice service:
image.png

Each assistant can set these options independently, which is very flexible.

Lobechat "Files" interface

Lobechat's "File" interface is not only a tool for users to upload, store and view files, but also supports integration with self-built knowledge bases. Users can add content to the knowledge base by uploading files. The AI assistant can perform data analysis, document reading and information retrieval based on the file content, improving the intelligence and efficiency of the conversation. The interface also provides classification and search functions, allowing users to quickly find information in files or knowledge bases, further enhancing the experience of task processing and personalized conversations. The interface is as follows:

image.png

Note: This part of the content is only available in the server database version. Whether it can be used normally depends on whether the object storage parameters provided during construction are correct. However, I have not used this part of the function much, especially the knowledge base part. I will add my actual experience after using it.

Lobechat "Discover" interface

Introduction to the "Discover" interface

Lobechat's "Discover" interface is the hub for exploring and enabling more features, covering the "Assistants", "Plugins", "Models", and "Model Providers" modules. Users can find and enable different assistants and plugins there to extend the AI assistant's capabilities for diverse scenarios. They can also select and switch between models, and connect to multiple model service providers to optimize model performance and service experience as needed. The interface also surfaces the latest feature updates and popular resource recommendations, helping users continuously improve their AI workflow:

image.png

Note: the "Models" and "Model Providers" functions overlap with what I covered in "App Settings - Language Model - Model List" (e.g. how to create a new model version), so I will not repeat that; here I only introduce "Assistants" and "Plugins".

Assistants

The "Assistants" tab in Lobechat's "Discover" interface helps users browse and choose AI assistants: you can view each assistant's functions and applicable scenarios, and add or switch assistants as needed for specific tasks or conversations:

image.png

For example, to add a "full-stack" development assistant, search for "full-stack" directly in the search box:
image.png

Then the assistant details interface will appear. Confirm and complete the addition of the assistant:
image.png

The added AI assistant will appear in Lobechat's "Session" interface:
image.png

More types of helpers can be added in the same way.

Plugins

The "Plugins" tab in Lobechat's "Discover" interface is used to browse and manage feature extensions: users can select and enable plugins to enhance the AI assistant, for example integrating third-party tools or handling specific tasks. Plugins let users extend Lobechat's capabilities as needed, improving its flexibility and usefulness across application scenarios. The interface looks like this:

image.png

Below, I use the "Current Time Assistant" plugin as an example to show how to add and use a plugin so the AI assistant can get the current time.


By default, Lobechat's AI assistant can't answer questions about the current time:

image.png

This is because many AI assistants (including some versions of ChatGPT and other language models) cannot access real-time information. They rely on pre-trained models, which cannot obtain real-time data such as the current time without access to external data sources. In most application scenarios these assistants have no external connections: they are not wired to a real-time clock or external APIs, and all answers are based on training data or preset fixed times (this is also for privacy and security, preventing access to the device clock or other system information without explicit permission). They therefore usually reply that they cannot provide real-time data.
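Plugins close this gap by exposing a callable tool to the model. The sketch below is in OpenAI function-calling style; it illustrates the mechanism only, and is an assumption for demonstration rather than Lobechat's actual plugin protocol.

```python
from datetime import datetime, timezone

# Tool schema advertised to the model: it tells the model a "get_current_time"
# function exists and takes no parameters.
TIME_TOOL = {
    "type": "function",
    "function": {
        "name": "get_current_time",
        "description": "Return the current UTC date and time in ISO 8601 format.",
        "parameters": {"type": "object", "properties": {}},
    },
}

def get_current_time() -> str:
    """Runs locally when the model requests the tool; the result is fed back
    into the conversation so the model can answer with real-time data."""
    return datetime.now(timezone.utc).isoformat()
```

The model never reads the clock itself: it emits a tool call, the client executes the function and returns the string, and the model then phrases the final answer around that value.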


image.png

image.png

Then simply select the newly installed "Current Time Assistant" plugin in the toolbar of the "Session" interface:
image.png

Then ask again about the current time, and the AI assistant can already answer:
image.png

Other extension plug-ins can be added or enabled the same way. Note, however, that some plug-ins (such as those for YouTube- and Google-related functions) may require a VPN to work properly.

Afterword

Before writing this tutorial, I did not expect it to run this long, and along the way I discovered quite a few setting details I had never noticed before. Writing things down really does help with deep learning and understanding (I feel like I have said something similar many times already~).

But on reflection it makes sense. As a popular local large language model chat UI, Lobechat is very strong in feature completeness, flexibility, customizability, local deployment, and user experience, and is especially suitable for individuals and small teams. However, for users unfamiliar with large language models or lacking a technical background, Lobechat's advanced settings (such as model parameter configuration) may be hard to use, and there is a certain learning cost (I basically only figured it all out by writing this tutorial).

Therefore, I have tried to write the settings and usage details in this tutorial as thoroughly as possible, hoping to help friends with less technical background. Even so, some learning cost is unavoidable; I simply hope that this tutorial and the previous few articles help you avoid the detours I took.

The content of this blog is original; please credit the source when reprinting. For more blog articles, see the Sitemap. The blog's RSS address is https://blog.tangwudi.com/feed, and subscriptions are welcome; if needed, you can join the Telegram Group to discuss issues together.