Named Entity Recognition with spaCy and Large Language Models

Xin Cheng
2 min read · Jan 31, 2024


With Azure OpenAI

spaCy is the go-to NER library. With the integration of large language models (LLMs) into spaCy pipelines via spacy-llm, it supports fast prototyping: you can turn unstructured text into structured output through prompting alone, without training a model.

Environment variables

export OPENAI_API_KEY=<api key>
export AZURE_OPENAI_KEY=<api key>

llm_ner.cfg

[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORGANISATION", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Azure.v1"
name =
deployment_name =
base_url =
model_type = "chat"
api_version =
config = {"temperature": 0.0 }
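The blank name, deployment_name, base_url, and api_version fields must be filled in with your own Azure OpenAI deployment details. The file itself uses spaCy's extended INI dialect (parsed by its confection library, which also resolves the @-registered functions); purely to illustrate the sectioned shape, Python's stdlib configparser can read the same structure — this is only a sketch, not how spaCy actually loads it:

```python
from configparser import ConfigParser

# A config in the same sectioned shape as llm_ner.cfg
# (illustrative values only; spaCy-style values keep their quotes).
cfg_text = """
[nlp]
lang = "en"
pipeline = ["llm"]

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORGANISATION", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Azure.v1"
model_type = "chat"
"""

parser = ConfigParser()
parser.read_string(cfg_text)

# Sections are addressed by their dotted names.
print(parser["components.llm.task"]["@llm_tasks"])  # prints "spacy.NER.v3" (quotes included)
print(parser["nlp"]["pipeline"])                    # prints ["llm"]
```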

Test

from spacy_llm.util import assemble

# Build the pipeline from the config file (the API key is read from the environment)
nlp = assemble("llm_ner.cfg")
content = "Jack and Jill rode up the hill in Les Deux Alpes"
doc = nlp(content)
# Entities extracted by the LLM come back as standard spaCy spans
print([(ent.text, ent.label_) for ent in doc.ents])

Result

[('Jack', 'PERSON'), ('Jill', 'PERSON'), ('Les Deux Alpes', 'LOCATION')]
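Under the hood, the NER task prompts the model for entity mentions and then aligns each returned string back to a character span in the original text. A minimal pure-Python sketch of that alignment step (a hypothetical helper for illustration, not spacy-llm's actual implementation):

```python
def align_entities(text, mentions):
    """Map (mention, label) pairs returned by an LLM back to
    (text, label, start, end) character spans in the source text.
    Mentions not found in the text (hallucinations) are dropped."""
    spans = []
    cursor = 0  # search forward so repeated mentions map to distinct spans
    for mention, label in mentions:
        start = text.find(mention, cursor)
        if start == -1:
            continue  # hallucinated mention: skip it
        end = start + len(mention)
        spans.append((mention, label, start, end))
        cursor = end
    return spans

content = "Jack and Jill rode up the hill in Les Deux Alpes"
llm_output = [("Jack", "PERSON"), ("Jill", "PERSON"),
              ("Les Deux Alpes", "LOCATION")]
print(align_entities(content, llm_output))
# [('Jack', 'PERSON', 0, 4), ('Jill', 'PERSON', 9, 13),
#  ('Les Deux Alpes', 'LOCATION', 34, 48)]
```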

Langchain integration

For models that are not directly supported, the suggested route is spacy-llm's LangChain integration. However, as of 2024-01-30, spacy-llm only supports langchain.llms (i.e. completion-style models such as the OpenAI completion API), not chat models (openai.chat.completions).

langchain_community.llms.__all__
['AI21', 'AlephAlpha', 'AmazonAPIGateway', 'Anthropic', 'Anyscale', 'Aphrodite', 'Arcee', 'Aviary', 'AzureMLOnlineEndpoint', 'AzureOpenAI', 'Banana', 'Baseten', 'Beam', 'Bedrock', 'CTransformers', 'CTranslate2', 'CerebriumAI', 'ChatGLM', 'Clarifai', 'Cohere', 'Databricks', 'DeepInfra', 'DeepSparse', 'EdenAI', 'FakeListLLM', 'Fireworks', 'ForefrontAI', 'GigaChat', 'GPT4All', 'GooglePalm', 'GooseAI', 'GradientLLM', 'HuggingFaceEndpoint', 'HuggingFaceHub', 'HuggingFacePipeline', 'HuggingFaceTextGenInference', 'HumanInputLLM', 'KoboldApiLLM', 'LlamaCpp', 'TextGen', 'ManifestWrapper', 'Minimax', 'MlflowAIGateway', 'Modal', 'MosaicML', 'Nebula', 'NIBittensorLLM', 'NLPCloud', 'OCIModelDeploymentTGI', 'OCIModelDeploymentVLLM', 'Ollama', 'OpenAI', 'OpenAIChat', 'OpenLLM', 'OpenLM', 'PaiEasEndpoint', 'Petals', 'PipelineAI', 'Predibase', 'PredictionGuard', 'PromptLayerOpenAI', 'PromptLayerOpenAIChat', 'OpaquePrompts', 'RWKV', 'Replicate', 'SagemakerEndpoint', 'SelfHostedHuggingFaceLLM', 'SelfHostedPipeline', 'StochasticAI', 'TitanTakeoff', 'TitanTakeoffPro', 'Tongyi', 'VertexAI', 'VertexAIModelGarden', 'VLLM', 'VLLMOpenAI', 'WatsonxLLM', 'Writer', 'OctoAIEndpoint', 'Xinference', 'JavelinAIGateway', 'QianfanLLMEndpoint', 'YandexGPT', 'VolcEngineMaasLLM']
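Routing through LangChain swaps only the [components.llm.model] section of the config. spacy-llm registers LangChain models under names of the form langchain.<API>.v1 — e.g. langchain.OpenAI.v1 for the completion-style OpenAI wrapper, consistent with the limitation above. Exact registry names and parameters can vary across spacy-llm versions, so treat this fragment as a sketch:

[components.llm.model]
@llm_models = "langchain.OpenAI.v1"
name = "gpt-3.5-turbo-instruct"
config = {"temperature": 0.0}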

Appendix

LLM for NER
