7 query strategies for exploring knowledge graphs with Graph RAG

Recently, the NebulaGraph community has been exploring and sharing work at the intersection of LLM + Graph and Graph RAG. In LlamaIndex and LangChain, NebulaGraph has introduced a series of knowledge graph and graph store tools to support orchestration and interaction between graphs and large language models. Previously, NebulaGraph evangelist Siwei Gu, a main contributor to this work, introduced in detail methods such as graph construction, Text2Cypher, Graph RAG, and Graph Index, and demonstrated relevant examples and results.

Recently, Wenqi Glantz, an engineer at ArisGlobal, conducted a comprehensive experiment, evaluating and analyzing all the Graph + LLM and RAG methods built on NebulaGraph and LlamaIndex, and drew some insightful conclusions.

This article received widespread recognition on Twitter and LinkedIn. With Wenqi's consent, we have translated it, hoping to provide you with more insights and references for exploring and practicing the Graph + LLM approach.

Since Wenqi Glantz's family are die-hard fans of the Philadelphia Phillies, in this article she uses a knowledge graph, specifically the graph database NebulaGraph, to query information about this Philadelphia-based Major League Baseball team.

Architectural ideas

Here, we will use the Wikipedia page for the Philadelphia Phillies as one data source. In addition, because Philadelphia fans recently gave a standing ovation to our favorite player Trea Turner, we will also use a YouTube video commenting on that event as another data source.

Now, our architecture diagram looks like this:

(Architecture diagram provided by the author)

If you are familiar with knowledge graphs and the graph database NebulaGraph, you can jump directly to the "Specific implementation of knowledge graph RAG" section. If you are unfamiliar with NebulaGraph, please read on.

What is a Knowledge Graph (KG)

A knowledge graph is a knowledge base that uses a graph-structured data model or topology to integrate data. It is a way of representing real-world entities and their relationships with each other. Knowledge graphs are often used to implement business scenarios such as search engines, recommendation systems, and social networks.

The composition of the knowledge graph

Knowledge graphs generally have two main components:

  • Vertex/Node: whether called a vertex or a node, it represents an entity or object in the knowledge domain. Each node corresponds to a unique entity and is identified by a unique identifier. For example, in this article's baseball team knowledge graph, nodes may include "Philadelphia Phillies" and "Major League Baseball".
  • Edge: represents the relationship between two nodes. For example, an edge compete in might connect the "Philadelphia Phillies" node to the "Major League Baseball" node.

Triplets

Triplets are the basic data units of knowledge graphs and consist of three parts:

  • Subject: the node the triplet describes
  • Predicate: the relationship between the subject and the object
  • Object: the node the relationship points to

In the triplet example below, "Philadelphia Phillies" is the subject, "compete in" is the predicate, and "Major League Baseball" is the object.

(Philadelphia Phillies)--[compete in]->(Major League Baseball)

Graph databases store and query complex graph data efficiently by storing triplets.
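Conceptually, a triplet store can be sketched in a few lines of Python. The toy index below (a conceptual illustration only, not how NebulaGraph actually stores data) simply groups triplets by subject for fast lookup:

```python
from collections import defaultdict

# (subject, predicate, object) triplets, as described above
triples = [
    ("Philadelphia Phillies", "compete in", "Major League Baseball"),
    ("Philadelphia Phillies", "based in", "Philadelphia"),
]

# Group triplets by subject so all facts about an entity are one lookup away
index = defaultdict(list)
for subj, pred, obj in triples:
    index[subj].append((pred, obj))

# Look up everything we know about one entity:
print(index["Philadelphia Phillies"])
# [('compete in', 'Major League Baseball'), ('based in', 'Philadelphia')]
```

A real graph database adds indexes over predicates and objects as well, plus distributed storage, but the subject-keyed lookup is the core idea behind fast traversal.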

What is Cypher

Cypher is a declarative graph query language supported by graph databases. With Cypher, we tell the knowledge graph what data we want, not how to get it. This makes Cypher queries more readable and maintainable. In addition, Cypher is easy to learn, yet expressive enough for complex graph queries.

The following is a simple query example of Cypher:

%%ngql 
MATCH (p:`entity`)-[e:relationship]->(m:`entity`)
  WHERE p.`entity`.`name` == 'Philadelphia Phillies' 
RETURN p, e, m;

This query will find all entities related to the baseball team "Philadelphia Phillies".

What is NebulaGraph

NebulaGraph is one of the best graph databases on the market. It is open source, distributed, and can handle large-scale graphs containing trillions of edges and vertices with millisecond latency. Many large companies are using it extensively for various application development, including social media, recommendation systems, fraud detection, etc.

Install NebulaGraph

To implement the Philadelphia Phillies RAG, we need to have NebulaGraph installed locally. One of the easiest ways to install NebulaGraph is with Docker Desktop. Detailed installation instructions can be found in NebulaGraph's documentation.

If you are new to NebulaGraph, it is highly recommended to familiarize yourself with the documentation first.

Specific implementation of knowledge graph RAG

Siwei Gu, chief evangelist of NebulaGraph, and the LlamaIndex team have written a comprehensive guide on Knowledge Graph RAG development. I learned a lot from it, and I recommend you read it after finishing this article.

Now, using what we learned from the guide, we begin a step-by-step introduction to building the Philadelphia Phillies RAG using LlamaIndex, NebulaGraph, and GPT-3.5.

The source code can be found in my GitHub repository: https://github.com/wenqiglantz/llamaindex_nebulagraph_phillies, which includes the project's complete Jupyter Notebook.

Implementation Step 1: Installation and Configuration

In addition to LlamaIndex, we also need to install some libraries:

  • ipython-ngql: A Python package to help you better connect to NebulaGraph from Jupyter Notebook or iPython;
  • nebula3-python: Python client to connect and manage NebulaGraph;
  • pyvis: A tool library to quickly generate visual network diagrams with minimal Python code;
  • networkx: Python library for studying graphs and networks;
  • youtube_transcript_api: Python API to get transcripts/subtitles for YouTube videos.
%pip install llama_index==0.8.33 ipython-ngql nebula3-python pyvis networkx youtube_transcript_api

We also need to set up the OpenAI API key and configure logging for the application:

import os
import logging
import sys

os.environ["OPENAI_API_KEY"] = "sk-####################"

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

Implementation Step 2: Connect to NebulaGraph and create a new graph space

Assuming you have installed NebulaGraph locally, we can now connect to it from a Jupyter Notebook (note: do not try to connect to a local NebulaGraph from Google Colab; for some reason it won't work).

Follow the steps and code snippets below:

  • Connect to the local NebulaGraph (the default credentials are root / nebula)
  • Create a graph space named phillies_rag
  • Create tags, edges, and a tag index in the new graph space
os.environ["GRAPHD_HOST"] = "127.0.0.1"
os.environ["NEBULA_USER"] = "root"
os.environ["NEBULA_PASSWORD"] = "nebula" 
os.environ["NEBULA_ADDRESS"] = "127.0.0.1:9669"  

%reload_ext ngql
connection_string = f"--address {os.environ['GRAPHD_HOST']} --port 9669 --user root --password {os.environ['NEBULA_PASSWORD']}"
%ngql {connection_string}

%ngql CREATE SPACE IF NOT EXISTS phillies_rag(vid_type=FIXED_STRING(256), partition_num=1, replica_factor=1);

%%ngql
USE phillies_rag;
CREATE TAG IF NOT EXISTS entity(name string);
CREATE EDGE IF NOT EXISTS relationship(relationship string);

%ngql CREATE TAG INDEX IF NOT EXISTS entity_index ON entity(name(256));

After creating the new graph space, we construct a NebulaGraphStore. Refer to the code snippet below:

from llama_index.storage.storage_context import StorageContext
from llama_index.graph_stores import NebulaGraphStore

space_name = "phillies_rag"
edge_types, rel_prop_names = ["relationship"], ["relationship"]
tags = ["entity"]

graph_store = NebulaGraphStore(
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

Implementation Step 3: Load data and create KG index

It's time to load the data. Our source data comes from the Philadelphia Phillies' Wikipedia page and a YouTube video of Trea Turner receiving a standing ovation in August 2023.

To save time and cost, we first try to load the KG index from local storage. If the index exists, we load it. If it does not exist (for example, on the first run), we load the two source documents (the Wikipedia page and the YouTube video mentioned above), build the KG index, and persist the doc, index, and vector stores in the storage_graph directory under the project root.

from llama_index import (
    LLMPredictor,
    ServiceContext,
    KnowledgeGraphIndex,
)
from llama_index.graph_stores import SimpleGraphStore
from llama_index import download_loader
from llama_index.llms import OpenAI

# define LLM
llm = OpenAI(temperature=0.1, model="gpt-3.5-turbo")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)

from llama_index import load_index_from_storage
from llama_hub.youtube_transcript import YoutubeTranscriptReader

try:

    storage_context = StorageContext.from_defaults(persist_dir='./storage_graph', graph_store=graph_store)
    kg_index = load_index_from_storage(
        storage_context=storage_context,
        service_context=service_context,
        max_triplets_per_chunk=15,
        space_name=space_name,
        edge_types=edge_types,
        rel_prop_names=rel_prop_names,
        tags=tags,
        verbose=True,
    )
    index_loaded = True
except Exception:
    index_loaded = False

if not index_loaded:
    
    WikipediaReader = download_loader("WikipediaReader")
    loader = WikipediaReader()
    wiki_documents = loader.load_data(pages=['Philadelphia Phillies'], auto_suggest=False)
    print(f'Loaded {len(wiki_documents)} documents')

    youtube_loader = YoutubeTranscriptReader()
    youtube_documents = youtube_loader.load_data(ytlinks=['https://www.youtube.com/watch?v=k-HTQ8T7oVw'])    
    print(f'Loaded {len(youtube_documents)} YouTube documents')

    kg_index = KnowledgeGraphIndex.from_documents(
        documents=wiki_documents + youtube_documents,
        storage_context=storage_context,
        max_triplets_per_chunk=15,
        service_context=service_context,
        space_name=space_name,
        edge_types=edge_types,
        rel_prop_names=rel_prop_names,
        tags=tags,
        include_embeddings=True,
    )
    
    kg_index.storage_context.persist(persist_dir='./storage_graph')

When building a KG index, you need to pay attention to the following points:

  • max_triplets_per_chunk: the maximum number of triplets extracted per chunk. Setting it to 15 should cover the content of most (maybe not all) chunks;
  • include_embeddings: whether to include embeddings when creating the KG index. An embedding is a vector representation of text data that captures its semantics; embeddings are often used to let models judge semantic similarity between different text fragments. When include_embeddings=True is set, KnowledgeGraphIndex stores these embeddings in the index. This is useful when you want to perform semantic search over the knowledge graph, since embeddings can be used to find nodes and edges that are semantically similar to the query.

Implementation Step 4: Explore NebulaGraph via queries

Now, let's run a simple query.

For example, here's some information about the Philadelphia Phillies:

from IPython.display import Markdown, display

query_engine = kg_index.as_query_engine()
response = query_engine.query("Tell me about some of the facts of Philadelphia Phillies.")
display(Markdown(f"<b>{response}</b>"))

Here’s an overview from the Philadelphia Phillies’ Wikipedia page, and it’s a pretty good summary:

Then use Cypher to query:

%%ngql 
MATCH (p:`entity`)-[e:relationship]->(m:`entity`)
  WHERE p.`entity`.`name` == 'Philadelphia Phillies' 
RETURN p, e, m;

This query will match all entities related to the Philadelphia Phillies. The query results will return a list of all entities related to the Philadelphia Phillies, their relationships to the Philadelphia Phillies, and the Philadelphia Phillies entities themselves.

Now, let's execute this Cypher query in Jupyter Notebook:

As you can see, the query returned 9 rows of data.

Next, run the ng_draw command from the ipython-ngql package, which renders the result of the NebulaGraph query in a separate HTML file. We get the following graphic: centered on the Philadelphia Phillies node, it extends out to nine other nodes, each representing a row of the Cypher query results. An edge connects each of these nodes to the central node, representing the relationship between the two.

What’s really cool is that you can also drag nodes to manipulate the graph!

Now that we have a basic understanding of NebulaGraph, let's dig a little deeper.

Implementation Step 5: 7 Ways to Explore the Graph

Next, let us query the knowledge graph using different methods based on the KG index and observe their results.

Graph exploration method 1: KG vector-based retrieval

query_engine = kg_index.as_query_engine()

This approach finds KG entities by vector similarity, fetches the connected text chunks, and optionally explores relationships. It is LlamaIndex's default query method after index construction. It is very simple and works out of the box with no additional parameters.

Graph exploration method 2: KG keyword-based retrieval

kg_keyword_query_engine = kg_index.as_query_engine(
    # setting to false uses the raw triplets instead of adding the text from the corresponding nodes
    include_text=False,
    retriever_mode="keyword",
    response_mode="tree_summarize",
)

This query uses keywords to retrieve relevant KG entities, fetches the connected text chunks, and optionally explores relationships to gather more context. The parameter retriever_mode="keyword" specifies that retrieval is keyword-based.

  • include_text=False: the query engine uses only the raw triplets; the text of the corresponding nodes is not included;
  • response_mode="tree_summarize": the response is a summary built over a tree structure. The tree is constructed recursively bottom-up, with the retrieved chunks as the leaf nodes and the final summarized answer as the root. The tree_summarize response mode is useful for summarization tasks, such as providing a high-level overview of a topic or answering an open-ended question. It can also produce more complex responses, such as explaining why something happened or walking through the steps of a process.
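A rough mental model of tree_summarize (a conceptual sketch, not LlamaIndex's actual implementation) is pairwise, bottom-up merging of chunk summaries until one root answer remains. The summarize function below is a placeholder standing in for an LLM summarization call:

```python
def summarize(texts):
    # Placeholder for an LLM call that summarizes a group of texts.
    return " + ".join(texts)

def tree_summarize(chunks, fanout=2):
    # Recursively merge groups of `fanout` summaries until one remains:
    # the leaves are the retrieved chunks, the root is the final answer.
    while len(chunks) > 1:
        chunks = [
            summarize(chunks[i:i + fanout])
            for i in range(0, len(chunks), fanout)
        ]
    return chunks[0]

print(tree_summarize(["a", "b", "c", "d"]))  # "a + b + c + d"
```

Each level of the tree halves the number of texts, so even many retrieved chunks collapse into a single coherent summary after a logarithmic number of LLM calls.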

Graph exploration method 3: KG hybrid retrieval

kg_hybrid_query_engine = kg_index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=3,
    explore_global_knowledge=True,
)

Setting embedding_mode="hybrid" tells the query engine to use a hybrid of vector-based and keyword-based retrieval to pull information from the knowledge graph, deduplicating the results. KG hybrid retrieval not only uses keywords to find related triplets, it also uses vector-based retrieval to find semantically similar triplets. In essence, hybrid mode combines keyword search and semantic search, leveraging the strengths of both to improve the accuracy and relevance of the results.

  • include_text=True: as above, specifies whether to include the text of the nodes;
  • similarity_top_k=3: the top-K setting; it retrieves the three most similar results based on embeddings. You can adjust this value to fit your use case;
  • explore_global_knowledge=True: specifies whether the query engine should consider the global context of the knowledge graph when retrieving information. When explore_global_knowledge=True is set, the query engine does not limit its search to the local context (i.e., a node's direct neighbors), but considers the broader global context of the graph. This is useful when you want to retrieve information that is not directly related to the query but is relevant within the larger context of the knowledge graph.

The main difference between keyword-based retrieval and hybrid retrieval is the way we retrieve information from the knowledge graph: keyword-based retrieval uses a keyword approach, while hybrid retrieval uses a hybrid approach that combines embedding and keywords.
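The merge-and-deduplicate idea behind hybrid mode can be sketched with a few lines of plain Python (a conceptual illustration with made-up result IDs, not the library's implementation):

```python
# Hypothetical result IDs from two independent retrievers
keyword_hits = ["triple_1", "triple_2"]   # exact keyword matches
vector_hits  = ["triple_2", "triple_5"]   # semantically similar matches

# Hybrid retrieval: union of both result lists, duplicates removed.
# dict.fromkeys preserves first-seen order while deduplicating.
hybrid = list(dict.fromkeys(keyword_hits + vector_hits))
print(hybrid)  # ['triple_1', 'triple_2', 'triple_5']
```

Keyword matching catches exact entity mentions that embeddings can miss, while embeddings catch paraphrases that keywords miss; the union gives the query engine the best of both.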

Graph Exploration Method 4: Native Vector Index Retrieval

from llama_index import VectorStoreIndex

vector_index = VectorStoreIndex.from_documents(wiki_documents + youtube_documents)
vector_query_engine = vector_index.as_query_engine()

This method does not involve the knowledge graph at all. It is based on a vector index: it first builds a vector index over the documents, then builds a vector query engine from that index.

Graph exploration method 5: Custom combined query engine (combination of KG retrieval and vector index retrieval)

from llama_index import QueryBundle
from llama_index.schema import NodeWithScore
from llama_index.retrievers import BaseRetriever, VectorIndexRetriever, KGTableRetriever
from typing import List

class CustomRetriever(BaseRetriever):
    
    def __init__(
        self,
        vector_retriever: VectorIndexRetriever,
        kg_retriever: KGTableRetriever,
        mode: str = "OR",
    ) -> None:
        """Init params."""

        self._vector_retriever = vector_retriever
        self._kg_retriever = kg_retriever
        if mode not in ("AND", "OR"):
            raise ValueError("Invalid mode.")
        self._mode = mode

    def _retrieve(self, query_bundle: QueryBundle) -> List[NodeWithScore]:
        """Retrieve nodes given query."""

        vector_nodes = self._vector_retriever.retrieve(query_bundle)
        kg_nodes = self._kg_retriever.retrieve(query_bundle)

        vector_ids = {n.node.node_id for n in vector_nodes}
        kg_ids = {n.node.node_id for n in kg_nodes}

        combined_dict = {n.node.node_id: n for n in vector_nodes}
        combined_dict.update({n.node.node_id: n for n in kg_nodes})

        if self._mode == "AND":
            retrieve_ids = vector_ids.intersection(kg_ids)
        else:
            retrieve_ids = vector_ids.union(kg_ids)

        retrieve_nodes = [combined_dict[rid] for rid in retrieve_ids]
        return retrieve_nodes


from llama_index import get_response_synthesizer
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.retrievers import VectorIndexRetriever, KGTableRetriever

# create custom retriever
vector_retriever = VectorIndexRetriever(index=vector_index)
kg_retriever = KGTableRetriever(
    index=kg_index, retriever_mode="keyword", include_text=False
)
custom_retriever = CustomRetriever(vector_retriever, kg_retriever)

# create response synthesizer
response_synthesizer = get_response_synthesizer(
    service_context=service_context,
    response_mode="tree_summarize",
)

custom_query_engine = RetrieverQueryEngine(
    retriever=custom_retriever,
    response_synthesizer=response_synthesizer,
)

LlamaIndex lets us build a CustomRetriever; its implementation is shown above. It performs both knowledge graph search and vector search. The default mode OR takes the union of the two sets of search results: the combined result contains the output of both search methods, deduplicated:

  • details obtained from knowledge graph search (KGTableRetriever);
  • details of semantically similar results obtained from vector index search (VectorIndexRetriever).
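The effect of the two modes on a pair of result sets can be shown in miniature with plain Python sets (the node IDs here are made up):

```python
# Hypothetical node IDs returned by each retriever
vector_ids = {"n1", "n2", "n3"}  # from VectorIndexRetriever
kg_ids = {"n2", "n4"}            # from KGTableRetriever

# OR (default): union, everything either retriever found, deduplicated
assert vector_ids | kg_ids == {"n1", "n2", "n3", "n4"}

# AND: intersection, only nodes both retrievers agree on
assert vector_ids & kg_ids == {"n2"}
```

OR maximizes recall (good for broad questions), while AND maximizes precision at the cost of possibly returning nothing when the two retrievers disagree.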

Graph exploration method 6: KnowledgeGraphQueryEngine

So far we have explored the different query engines built on the KG index. Now let's look at another knowledge graph query engine that LlamaIndex offers: KnowledgeGraphQueryEngine. See the following code snippet:

from llama_index.query_engine import KnowledgeGraphQueryEngine

query_engine = KnowledgeGraphQueryEngine(
    storage_context=storage_context,
    service_context=service_context,
    llm=llm,
    verbose=True,
)

KnowledgeGraphQueryEngine is a query engine that lets us query the knowledge graph in natural language. It uses the LLM to generate Cypher queries and then executes them against the knowledge graph. This way, we can query the graph without learning Cypher or any other query language.

KnowledgeGraphQueryEngine takes storage_context, service_context, and llm, and builds a knowledge graph query engine, with NebulaGraphStore serving as storage_context.graph_store.
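The Text2Cypher idea can be sketched in a deliberately naive way: something maps a natural-language question to a Cypher/nGQL string, which is then executed against the graph. In the real engine that mapping is done by the LLM; everything below (the fake entity extraction and the template) is a hypothetical illustration, not the engine's actual prompt or output:

```python
def text_to_cypher(question):
    # In KnowledgeGraphQueryEngine this step is performed by the LLM;
    # here we fake it with trivial entity extraction plus a query template.
    entity = question.replace("Tell me about ", "").rstrip(".?")
    return (
        "MATCH (p:`entity`)-[e:relationship]->(m:`entity`) "
        f"WHERE p.`entity`.`name` == '{entity}' "
        "RETURN p, e, m;"
    )

print(text_to_cypher("Tell me about Philadelphia Phillies?"))
```

The generated string is then run against NebulaGraph exactly like the hand-written Cypher earlier in this article; the fragility of this generation step is also why we will see Text2Cypher errors in the experiments below.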

Graph exploration method 7: KnowledgeGraphRAGRetriever

KnowledgeGraphRAGRetriever is a retriever in LlamaIndex that can be wrapped in a RetrieverQueryEngine to perform Graph RAG queries on the knowledge graph. It takes a question or task as input and performs the following steps:

  1. Search for relevant entities in the knowledge graph using keyword extraction or embeddings;
  2. Fetch the subgraph of those entities from the knowledge graph, with a default depth of 2;
  3. Build the context based on the subgraph.
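Step 2, fetching a depth-2 subgraph around the matched entities, can be sketched as a bounded breadth-first traversal over an adjacency map. This is a conceptual stand-in for the graph query the retriever actually issues to NebulaGraph, with a made-up toy graph:

```python
from collections import deque

# Toy adjacency map standing in for the knowledge graph
graph = {
    "Phillies": ["MLB", "Trea Turner"],
    "Trea Turner": ["standing ovation"],
    "standing ovation": ["Philly fans"],
    "MLB": [],
    "Philly fans": [],
}

def subgraph(start, depth=2):
    # Bounded BFS: collect all nodes within `depth` hops of `start`.
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if d == depth:
            continue  # don't expand beyond the depth limit
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, d + 1))
    return seen

print(sorted(subgraph("Phillies", depth=2)))
# ['MLB', 'Phillies', 'Trea Turner', 'standing ovation']
```

Note that "Philly fans", three hops away, is excluded; the depth parameter trades broader context against noise and latency.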

A downstream task, such as an LLM call, can use this context to generate a response. The following code snippet shows how to build a KnowledgeGraphRAGRetriever:

from llama_index.retrievers import KnowledgeGraphRAGRetriever
from llama_index.query_engine import RetrieverQueryEngine

graph_rag_retriever = KnowledgeGraphRAGRetriever(
    storage_context=storage_context,
    service_context=service_context,
    llm=llm,
    verbose=True,
)

kg_rag_query_engine = RetrieverQueryEngine.from_args(
    graph_rag_retriever, service_context=service_context
)

Okay, now we have a good idea of the 7 query methods. Below, we use a set of questions to test their effectiveness.

Test 7 graph queries with 3 questions

Question 1: Tell me about Bryce Harper

The figure below shows the responses to this question from 7 query methods. I marked the query language in different colors:

Here are some of my thoughts based on the results:

  • KG vector-based retrieval, keyword-based retrieval, KnowledgeGraphQueryEngine, and KnowledgeGraphRAGRetriever all returned only key facts about our query subject, Bryce Harper, with no detailed elaboration;
  • KG hybrid retrieval, native vector index retrieval, and the custom combined query engine all returned a wealth of information relevant to the topic, primarily because of their ability to use query embeddings;
  • Native vector index retrieval returned answers faster (~3 seconds) than the other KG query engines (4+ seconds). KG hybrid retrieval was the slowest (~10 seconds).

Question 2: How did the standing ovation Trey Turner received affect his season performance?

This question is purposely designed, and it comes from a YouTube video dedicated to this standing ovation event, Philly fans Props to Trea Turner (we use "Trey" in the question because YouTube mistakenly spelled his name "Trey" instead of "Trea").

Take a look at the list of answers to the 7 query methods:

Here are some of my thoughts based on the results:

  • KG's vector-based retrieval returned a perfect answer, with all the supporting facts and detailed statistics showing how Philly fans helped Trea Turner's season. These facts, which explain the why, are stored in NebulaGraph and were extracted from the YouTube video;
  • KG's keyword-based retrieval returned a very short answer with no supporting facts;
  • The KG hybrid retrieval returned a good answer, although it lacked detailed factual information on Turner's performance after the standing ovation. Personally, I think this answer is slightly inferior to the one returned by KG vector-based retrieval;
  • Native vector index retrieval and the custom combined query engine returned good answers with more detailed factual information, but not as complete as the answer from KG vector-based retrieval. Why didn't the custom combined query engine produce a better answer than KG vector-based retrieval? The main reason I can think of is that the Wikipedia page has no information about Turner's standing ovation; only the YouTube video, which is specifically about that event, covers it, and that content was loaded into the knowledge graph. The knowledge graph had enough relevant content to return a solid answer, while native vector index retrieval and the custom combined query engine had no additional content to draw on for factual support;
  • KnowledgeGraphQueryEngine returned the syntax error below, probably caused by incorrectly generated Cypher, as shown in the screenshot. It seems KnowledgeGraphQueryEngine's Text2Cypher capability still has room to improve;

  • KnowledgeGraphRAGRetriever returned only the most basic information about Trea Turner's standing ovation; this answer is clearly not ideal;
  • Native vector index retrieval returned answers faster (~5 seconds) than the other KG query engines (10+ seconds), except for KG keyword-based retrieval (~6 seconds). The custom combined query engine was the slowest (~13 seconds).

Bottom line: KG vector-based retrieval appears to outperform all the other query engines mentioned above when comprehensive contextual data is correctly loaded into the knowledge graph.

Question 3: Tell me some facts about the Philadelphia Phillies’ current stadium.

Take a look at the list of answers to the 7 query methods:

Here are some of my thoughts based on the results:

  • KG's vector-based retrieval returned a decent answer, with some historical context for the ballpark;
  • KG's keyword-based retrieval got the answer wrong; it didn't even mention the name of the current stadium;
  • The hybrid retrieval returned only the most basic facts about the current stadium, such as its name, year, and location, which makes me wonder whether the embedding implementation in the knowledge graph could be improved. I contacted Wey (Siwei Gu) of NebulaGraph, and he said that embeddings will be optimized and vector search in NebulaGraph will be supported in the future. Awesome!
  • The native vector retrieval returned some facts about the current stadium, similar to the hybrid retrieval's results;
  • The custom combined query engine gave the best answer: detailed and comprehensive, backed by many statistics and facts about the ballpark. This was the best answer among all the query engines;
  • Based on the given contextual information, KnowledgeGraphQueryEngine could not find anything about the Philadelphia Phillies' current stadium. This appears to be another problem with automatically generating Cypher from natural language;
  • Based on the given contextual information, KnowledgeGraphRAGRetriever could not find any facts about the current stadium;
  • Native vector retrieval returned results faster (~3 seconds) than the KG query engines (6+ seconds). The custom combined query engine was the slowest (~12 seconds).

Key takeaways

Based on the experiments above, running three questions through the 7 query engines, here is a comparison of their advantages and disadvantages:

Which query engine is best for you will depend on your specific use case.

  • If the knowledge pieces in your data source are fragmented and fine-grained, and you need to perform complex reasoning over them, such as extracting entities and their relationships in a graph (as in fraud detection, social networks, or supply chain management), then a knowledge graph query engine is the better choice. The KG query engine is also helpful when your embeddings produce spurious correlations that lead to hallucinations.
  • If you need similarity searches, such as finding all nodes that are similar to a given node, or finding all nodes that are closest to a given node in vector space, then a vector query engine may be your best choice;
  • If you need a query engine that responds quickly, vector query engines may be the better choice, as they are generally faster than KG query engines. Even without embeddings, the task of fetching subgraphs (which in this setup runs as subtasks on NebulaGraph's single storage service) may be the main reason for the KG query engines' higher latency;
  • If you need high-quality answers, then the custom combined query engine, which combines the advantages of the KG query engine and the vector query engine, is your best choice.

Summary

In this article, we explored how a knowledge graph, specifically the graph database NebulaGraph, combined with LlamaIndex and GPT-3.5, can be used to build a RAG application for the Philadelphia Phillies.

Additionally, we explored seven query engines, studied their inner workings, and observed their answers to three questions. We compared the pros and cons of each query engine to better understand the use cases for which each query engine was designed.

I hope this article inspires you. The related code is in the GitHub repository: https://github.com/wenqiglantz/llamaindex_nebulagraph_phillies/tree/main

Happy coding!

Thank you for reading this article (///▽///)

If you want to try out the graph database NebulaGraph, remember to download it from GitHub, use it, and give it a star on GitHub. To discuss LLMs with other NebulaGraph users, remember to come to the LLM channel (login required) and play together~


Origin my.oschina.net/u/4169309/blog/10319574