schema.xml
<!--VECTORS-->
<field name="embeddings" type="vector" indexed="true" stored="true" multiValued="false" required="false" />
<fieldType name="vector" class="solr.DenseVectorField" vectorDimension="384" similarityFunction="cosine"/>
$ curl -u <INDEX_USERNAME>:<INDEX_PASSWORD> https://<OPENSOLR_INDEX_HOST>/solr/<OPENSOLR_INDEX_NAME>/schema/fieldtypes -H 'Content-type:application/json' -d '{
"add-field-type": {
"name": "vector",
"class": "solr.DenseVectorField",
"vectorDimension": 384,
"similarityFunction": "cosine"
}
}'
$ curl -u <INDEX_USERNAME>:<INDEX_PASSWORD> https://<OPENSOLR_INDEX_HOST>/solr/<OPENSOLR_INDEX_NAME>/schema/fields -H 'Content-type:application/json' -d '{
"add-field": {
"name":"embeddings",
"type":"vector",
"indexed":true,
"stored":false,
"multiValued":false,
"required":false
}
}'
Set stored to true if you want to see the vectors for debugging. The dimension and similarity function are defined on the vector field type, not the field itself, so adjust vectorDimension there to match your embedder's output size.
solrconfig.xml
<!-- The default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
<updateLog>
<int name="numVersionBuckets">65536</int>
<int name="maxNumLogsToKeep">10</int>
<int name="numRecordsToKeep">10</int>
</updateLog>
<!-- ... -->
</updateHandler>
Vector search has quickly become a core tool for modern search platforms. With advances in language models, we can encode text into high-dimensional vectors, making it possible to find not just what you type, but what you mean. It’s like giving your search engine a sixth sense! 🕵️‍♂️
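To make "finding what you mean" concrete, here is a toy sketch (not Opensolr code, and the 3-dimensional vectors are made up): documents whose embedding vectors point in a similar direction to the query's get a high cosine similarity, which is exactly what Solr's similarityFunction="cosine" computes at scale.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query_vec = [0.9, 0.1, 0.2]   # e.g. "flying machine"
doc_vec   = [0.8, 0.2, 0.3]   # e.g. "aircraft" — no shared keyword, similar meaning
off_topic = [0.1, 0.9, 0.1]   # e.g. "chocolate cake"

# The on-topic document scores higher despite sharing no literal terms.
print(cosine(query_vec, doc_vec) > cosine(query_vec, off_topic))  # → True
```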
As much as we love innovation, vector search still has a few quirks.
Hybrid search bridges the gap—combining trusty keyword (lexical) search with smart vector (neural) search for results that are both sharp and relevant.
Contrary to the grapevine, Solr can absolutely do hybrid search, even if the docs are a little shy about it. If your schema mixes traditional fields with a solr.DenseVectorField, you're all set.
Solr’s Boolean Query Parser lets you mix and match candidate sets with flair:
q={!bool should=$lexicalQuery should=$vectorQuery}&
lexicalQuery={!type=edismax qf=text_field}term1&
vectorQuery={!knn f=vector topK=10}[0.001, -0.422, -0.284, ...]
Result: All unique hits from both searches. No duplicates, more to love! ❤️
q={!bool must=$lexicalQuery must=$vectorQuery}&
lexicalQuery={!type=edismax qf=text_field}term1&
vectorQuery={!knn f=vector topK=10}[0.001, -0.422, -0.284, ...]
Result: Only the most relevant docs—where both worlds collide. 🤝
Adjust with the filter’s cost parameter. Need more detail? Check Solr’s Query Guide 📖
Mixing lexical and vector scores isn’t just math—it’s art (with a little science):
Normalize lexical scores to the 0–1 range and add them to the KNN scores. Easy math, solid baseline.
Scale lexical scores (e.g., to 0.1–1) and multiply by the KNN scores.
Tip: Test with real data—let your results do the talking!
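The two fusion strategies above can be sketched in a few lines of Python (assumption: you have already fetched per-document lexical and KNN scores from Solr; the doc ids and scores here are invented):

```python
def minmax(scores):
    """Min-max normalize a {doc_id: score} map into the 0-1 range."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

lexical = {"doc1": 12.4, "doc2": 3.1, "doc3": 7.8}    # raw BM25-style scores
knn     = {"doc1": 0.82, "doc2": 0.91, "doc4": 0.77}  # cosine similarities

norm_lex = minmax(lexical)

# Strategy 1: normalized lexical + KNN (docs missing one score get 0 for it)
additive = {doc: norm_lex.get(doc, 0.0) + knn.get(doc, 0.0)
            for doc in set(lexical) | set(knn)}

# Strategy 2: lexical rescaled to 0.1-1, multiplied by KNN (intersection only)
scaled_lex = {doc: 0.1 + 0.9 * s for doc, s in norm_lex.items()}
multiplicative = {doc: scaled_lex[doc] * knn[doc]
                  for doc in set(scaled_lex) & set(knn)}

print(sorted(additive, key=additive.get, reverse=True)[0])  # → doc1
```

Note the design choice: the additive variant keeps documents found by only one of the two searches, while the multiplicative variant zeroes out anything missing from either side.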
Why handcraft rules when a model can learn what works? Solr’s Learning To Rank (LTR) lets you blend scores with machine-learned finesse.
Sample Feature Set:
[
{"name": "lexicalScore", "class": "org.apache.solr.ltr.feature.SolrFeature", "params": { "q" : "{!func}scale(query(${lexicalQuery}),0,1)" }},
{"name": "vectorSimilarityScore", "class": "org.apache.solr.ltr.feature.SolrFeature", "params": { "q" : "{!func}vectorSimilarity(FLOAT32, DOT_PRODUCT, vectorField, ${queryVector})" }}
]
Train your model outside Solr, then plug it in for search that adapts and improves.
Which parameters go inside {!edismax} in lexicalQuery? 🧾
Parameter | Inside lexicalQuery? | Why |
---|---|---|
q | ✅ YES | Required for the subquery to function |
qf, pf, bf, bq, mm, ps | ✅ YES | All edismax features must go inside |
defType | ❌ NO | Already defined by {!edismax} |
hl, spellcheck, facet, rows, start, sort | ❌ NO | These are top-level Solr request features |
Here’s how to do it right when you want all the bells and whistles (highlighting, spellcheck, deep edismax):
# TOP-LEVEL BOOLEAN QUERY COMPOSING EDISMAX AND KNN
q={!bool should=$lexicalQuery should=$vectorQuery}
# LEXICAL QUERY: ALL YOUR EDISMAX STUFF GOES HERE
&lexicalQuery={!edismax q=$qtext qf=$qf pf=$pf mm=$mm bf=$bf}
# VECTOR QUERY
&vectorQuery={!knn f=vectorField topK=10}[0.123, -0.456, ...]
# EDISMAX PARAMS
&qtext="flying machine"
&qf=title^6 description^3 text^2 uri^4
&pf=text^10
&mm=1<100% 2<75% 3<50% 6<30%
&bf=recip(ms(NOW,publish_date),3.16e-11,1,1)
# NON-QUERY STUFF
&hl=true
&hl.fl=text
&hl.q=$lexicalQuery
&spellcheck=true
&spellcheck.q=$qtext
&rows=20
&start=0
&sort=score desc
Hybrid search gives you the sharp accuracy of keywords and the deep smarts of vectors—all in one system. With Solr, you can have classic reliability and modern magic. 🍦✨
“Why choose between classic and cutting-edge, when you can have both? Double-scoop your search!”
Happy hybrid searching! 🥳
The Opensolr AI-Hits API is free to use as part of your Opensolr Account.
The Opensolr AI-Hits LLM will generate a summary of the context, coming either from your Opensolr Web Crawler Index or from a manually entered context.
A number of other instructions can be passed to this API, for NER and other capabilities. It is in Beta at this point, but will get better with time.
Example: https://api.opensolr.com/solr_manager/api/ai_summary?email=PLEASE_LOG_IN&api_key=PLEASE_LOG_IN&index_name=my_crawler_solr_index&instruction=Answer%20The%20Query&query=Who%20is%20Donald%20Trump?
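The example URL can be built programmatically; here is a sketch using Python's standard library (the email, API key, and index name are placeholders, so substitute your own account values):

```python
from urllib.parse import urlencode

base = "https://api.opensolr.com/solr_manager/api/ai_summary"
query = urlencode({
    "email": "you@example.com",        # placeholder: your Opensolr email
    "api_key": "YOUR_API_KEY",         # placeholder: from your dashboard
    "index_name": "my_crawler_solr_index",
    "instruction": "Answer The Query",
    "query": "Who is Donald Trump?",
})

# urlencode percent/plus-escapes the spaces and the "?" for you.
print(base + "?" + query)
```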
embed
The `embed` endpoint allows you to generate vector embeddings for any arbitrary text payload (up to 50,000 characters) and store those embeddings in your specified Opensolr index. This is ideal for embedding dynamic or ad-hoc content, without having to pre-index data in Solr first.
https://api.opensolr.com/solr_manager/api/embed
Supports only POST requests.
Parameter | Type | Required | Description |
---|---|---|---|
email | string | Yes | Your Opensolr registration email address. |
api_key | string | Yes | Your API key from the Opensolr dashboard. |
index_name | string | Yes | Name of your Opensolr index/core to use. |
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
payload | string | Yes | – | The raw text string to embed. Maximum: 50,000 characters. |
- `payload` can be any UTF-8 text (e.g., a document, user input, generated content, etc).
- If `payload` is missing or shorter than 2 characters, the API returns a 404 error with a JSON error response.
- Use `index_name` to indicate where the embedding should be stored (requires the appropriate field in your Solr schema).
To store embeddings, your Solr schema must define an appropriate vector field, for example:
<field name="embeddings" type="vector" indexed="true" stored="false" multiValued="false"/>
<fieldType name="vector" class="solr.DenseVectorField" vectorDimension="384" similarityFunction="cosine"/>
Adjust the `name`, `type`, and `vectorDimension` as needed to fit your use-case and model.
POST https://api.opensolr.com/solr_manager/api/embed
Content-Type: application/x-www-form-urlencoded
email=your@email.com&api_key=YOUR_API_KEY&index_name=your_index&payload=Your text to embed here.
- Authenticate with `email` and `api_key`.
- Provide the `payload` parameter (must be 2–50,000 characters).
On success, the API returns:
{
"status": "success",
"embedding": [/* vector values */],
"length": 4381
}
Or, for invalid input:
{
"ERROR": "Invalid payload"
}
For more information or help, visit Opensolr Support or use your Opensolr dashboard.
embed_opensolr_index
Using the `embed_opensolr_index` endpoint involves Solr atomic updates, meaning each Solr document is updated individually with the new embeddings. Atomic updates in Solr only update the fields you include in the update payload; all other fields remain unchanged. However, you cannot generate embeddings from fields that are stored=false, because Solr cannot retrieve their values for you.
You will not lose stored=false fields just by running an atomic update. Atomic updates do NOT remove or overwrite fields you do not explicitly update. Data loss of non-stored fields only happens if you replace the entire document (a full document overwrite), not during field-level atomic updates.
Because of this, it's highly recommended to understand the implications of Solr atomic updates clearly. For most users, the safer approach is to create embeddings at indexing time (using the /embed endpoint), especially if you rely on non-stored fields for downstream features.
Please review the official documentation on Solr Atomic Updates to fully understand these implications before using this endpoint.
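For reference, here is what a field-level atomic update body actually looks like (the document id and vector values are hypothetical): only the "embeddings" field carries a "set" operation, so every other field on the document is left untouched.

```python
import json

# A field-level atomic update: only "embeddings" is modified.
doc_update = [{
    "id": "doc-42",                               # hypothetical document id
    "embeddings": {"set": [0.12, -0.33, 0.58]},   # "set" = replace this field only
}]
body = json.dumps(doc_update)
print(body)

# POST this body to:
#   https://<OPENSOLR_INDEX_HOST>/solr/<OPENSOLR_INDEX_NAME>/update?commit=true
# with Content-Type: application/json
```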
schema.xml
<!--VECTORS-->
<field name="embeddings" type="vector" indexed="true" stored="true" multiValued="false" required="false" />
<fieldType name="vector" class="solr.DenseVectorField" vectorDimension="384" similarityFunction="cosine"/>
$ curl -u <INDEX_USERNAME>:<INDEX_PASSWORD> https://<OPENSOLR_INDEX_HOST>/solr/<OPENSOLR_INDEX_NAME>/schema/fieldtypes -H 'Content-type:application/json' -d '{
"add-field-type": {
"name": "vector",
"class": "solr.DenseVectorField",
"vectorDimension": 384,
"similarityFunction": "cosine"
}
}'
$ curl -u <INDEX_USERNAME>:<INDEX_PASSWORD> https://<OPENSOLR_INDEX_HOST>/solr/<OPENSOLR_INDEX_NAME>/schema/fields -H 'Content-type:application/json' -d '{
"add-field": {
"name":"embeddings",
"type":"vector",
"indexed":true,
"stored":false,
"multiValued":false,
"required":false
}
}'
Set stored to true if you want to see the vectors for debugging. The dimension and similarity function are defined on the vector field type, not the field itself, so adjust vectorDimension there to match your embedder's output size.
solrconfig.xml
<!-- The default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
<updateLog>
<int name="numVersionBuckets">65536</int>
<int name="maxNumLogsToKeep">10</int>
<int name="numRecordsToKeep">10</int>
</updateLog>
<!-- ... -->
</updateHandler>
The `embed_opensolr_index` endpoint allows Opensolr users to generate and store text embeddings for documents in their Opensolr indexes using a Large Language Model (LLM). These embeddings power advanced features such as semantic search, classification, and artificial intelligence capabilities on top of your Solr data.
https://api.opensolr.com/solr_manager/api/embed_opensolr_index
Supports both GET and POST methods.
Parameter | Type | Required | Description |
---|---|---|---|
email | string | Yes | Your Opensolr registration email address. |
api_key | string | Yes | Your API key from the Opensolr dashboard. |
index_name | string | Yes | Name of your Opensolr index/core to be embedded. |
Parameter | Type | Required | Default | Description |
---|---|---|---|---|
emb_solr_fields | string | No | title,description,text | Comma-separated list of Solr fields to embed (can be any valid fields in your index). |
emb_solr_embeddings_field_name | string | No | embeddings | Name of the Solr field to store generated embeddings. |
emb_full_solr_grab | bool/string | No | false | If “yes”, embed all documents in the index; otherwise use pagination parameters below. |
emb_solr_start | integer | No | 0 | Starting document offset (for pagination). |
emb_solr_rows | integer | No | 10 | Number of documents to process in the current request (page size). |
- Fields to embed come from `emb_solr_fields`, which defaults to title,description,text, but you may specify any fields from your index for embedding.
- Set `emb_solr_embeddings_field_name` to match the embeddings field in your schema.
- Your schema must define the embeddings field in schema.xml. Example configuration:
<field name="embeddings" type="vector" indexed="true" stored="false" multiValued="false"/>
<fieldType name="vector" class="solr.DenseVectorField" vectorDimension="384" similarityFunction="cosine"/>
Replace `embeddings` and `vector` with your custom names if you use different field names.
Solr atomic updates update only the fields you specify in the update request. Other fields, including those defined as non-stored (stored=false), are not changed or removed by an atomic update. However, since non-stored fields cannot be retrieved from Solr, you cannot use them to generate embeddings after indexing time.
If you ever replace an entire document (full overwrite), non-stored fields will be lost unless you explicitly provide their values again.
Set `emb_full_solr_grab` to yes to embed all documents in the index; otherwise, the endpoint uses pagination.
POST https://api.opensolr.com/solr_manager/api/embed_opensolr_index
Content-Type: application/x-www-form-urlencoded
email=your@email.com&api_key=YOUR_API_KEY&index_name=your_index
POST https://api.opensolr.com/solr_manager/api/embed_opensolr_index
Content-Type: application/x-www-form-urlencoded
email=your@email.com&api_key=YOUR_API_KEY&index_name=your_index&emb_solr_fields=title,content&emb_solr_embeddings_field_name=embeddings&emb_full_solr_grab=yes
GET https://api.opensolr.com/solr_manager/api/embed_opensolr_index?email=your@email.com&api_key=YOUR_API_KEY&index_name=your_index
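For large indexes, the pagination parameters can drive a simple batching loop. Here is a sketch (the credentials and index name are placeholders, the 35-document total is invented, and `call_api` stands in for an HTTP POST to the endpoint):

```python
def batches(total_docs, page_size=10):
    """Yield (emb_solr_start, emb_solr_rows) pairs covering the whole index."""
    for start in range(0, total_docs, page_size):
        yield start, min(page_size, total_docs - start)

params = []
for start, rows in batches(total_docs=35, page_size=10):
    params.append({
        "email": "you@example.com",        # placeholder credentials
        "api_key": "YOUR_API_KEY",
        "index_name": "your_index",
        "emb_solr_start": start,
        "emb_solr_rows": rows,
    })
    # call_api("https://api.opensolr.com/solr_manager/api/embed_opensolr_index", params[-1])

print([(p["emb_solr_start"], p["emb_solr_rows"]) for p in params])
# → [(0, 10), (10, 10), (20, 10), (30, 5)]
```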
- Authenticates with `email` and `api_key`.
- Targets the index given by `index_name`.
- Reads the text to embed from the fields in `emb_solr_fields`.
- Stores the generated vectors in the field named by `emb_solr_embeddings_field_name`.
- If `emb_full_solr_grab` is yes, processes all documents; otherwise uses `emb_solr_start` and `emb_solr_rows` for batch processing.
For more information or help, visit Opensolr Support or use your Opensolr dashboard.
Heads up!
Before you dive into using NLP models with your Opensolr index, please contact us to request the NLP models to be installed for your Opensolr index.
We’ll reply with the correct path to use for the .bin files in your schema.xml or solrconfig.xml. Or, if you’d rather avoid all the hassle, just ask us to set it up for you—done and done.
This is your step-by-step guide to using AI-powered OpenNLP models with Opensolr. In this walkthrough, we’ll cover Named Entity Recognition (NER) using default OpenNLP models, so you can start extracting valuable information (like people, places, and organizations) directly from your indexed data.
⚠️ Note:
Currently, these models are enabled by default only in the Germany Solr Version 9 environment. So, if you want an easy life, create your index there!
We’re happy to set up the models in any region (or even your dedicated Opensolr infrastructure for corporate accounts) if you reach out via our Support Helpdesk.
You can also download OpenNLP default models from us or the official OpenNLP website.
Create your Opensolr Index
Edit Your schema.xml
Open schema.xml to edit.
Dynamic Field (for storing entities):
<dynamicField name="*_s" type="string" multiValued="true" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" storeOffsetsWithPositions="true" />
**NLP Tokenizer fieldType:**
<fieldType name="text_nlp" class="solr.TextField" positionIncrementGap="100">
<analyzer>
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
<tokenizer class="solr.OpenNLPTokenizerFactory"
sentenceModel="en-sent.bin"
tokenizerModel="en-token.bin"/>
<filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
<filter class="solr.OpenNLPChunkerFilterFactory" chunkerModel="en-chunker.bin"/>
<filter class="solr.TypeAsPayloadFilterFactory"/>
</analyzer>
</fieldType>
- **Important:** Don’t use the `text_nlp` type for your dynamic fields! It’s only for the update processor.
Save, then Edit Your solrconfig.xml
Add the updateRequestProcessorChain (and corresponding requestHandler):
<requestHandler name="/update" class="solr.UpdateRequestHandler" >
<lst name="defaults">
<str name="update.chain">nlp</str>
</lst>
</requestHandler>
<updateRequestProcessorChain name="nlp">
<!-- Extract English People Names -->
<processor class="solr.OpenNLPExtractNamedEntitiesUpdateProcessorFactory">
<str name="modelFile">en-ner-person.bin</str>
<str name="analyzerFieldType">text_nlp</str>
<arr name="source">
<str>title</str>
<str>description</str>
</arr>
<str name="dest">people_s</str>
</processor>
<!-- Extract Spanish People Names -->
<processor class="solr.OpenNLPExtractNamedEntitiesUpdateProcessorFactory">
<str name="modelFile">es-ner-person.bin</str>
<str name="analyzerFieldType">text_nlp</str>
<arr name="source">
<str>title</str>
<str>description</str>
</arr>
<str name="dest">people_s</str>
</processor>
<!-- Extract Locations -->
<processor class="solr.OpenNLPExtractNamedEntitiesUpdateProcessorFactory">
<str name="modelFile">en-ner-location.bin</str>
<str name="analyzerFieldType">text_nlp</str>
<arr name="source">
<str>title</str>
<str>description</str>
</arr>
<str name="dest">location_s</str>
</processor>
<!-- Extract Organizations -->
<processor class="solr.OpenNLPExtractNamedEntitiesUpdateProcessorFactory">
<str name="modelFile">en-ner-organization.bin</str>
<str name="analyzerFieldType">text_nlp</str>
<arr name="source">
<str>title</str>
<str>description</str>
</arr>
<str name="dest">organization_s</str>
</processor>
<!-- Language Detection -->
<processor class="org.apache.solr.update.processor.OpenNLPLangDetectUpdateProcessorFactory">
<str name="langid.fl">title,text,description</str>
<str name="langid.langField">language_s</str>
<str name="langid.model">langdetect-183.bin</str>
</processor>
<!-- Remove duplicate extracted entities -->
<processor class="solr.UniqFieldsUpdateProcessorFactory">
<str name="fieldRegex">.*_s</str>
</processor>
<processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
Populate Test Data (for the impatient!)
Sample JSON:
{
"id": "1",
"title": "Jack Sparrow was a pirate. Many feared him. He used to live in downtown Las Vegas.",
"description": "Jack Sparrow and Janette Sparrowa, are now on their way to Monte Carlo for the summer vacation, after working hard for Microsoft, creating the new and exciting Windows 11 which everyone now loves. :)",
"text": "The Apache OpenNLP project is developed by volunteers and is always looking for new contributors to work on all parts of the project. Every contribution is welcome and needed to make it better. A contribution can be anything from a small documentation typo fix to a new component.Learn more about how you can get involved."
}
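Indexing the sample document above through the /update handler triggers the nlp update chain configured in solrconfig.xml, so the people_s, location_s, organization_s, and language_s fields get populated on the way in. A sketch (host placeholder is illustrative, and the long field values are abbreviated here):

```python
import json

# The sample document from above, abbreviated for the sketch.
doc = {
    "id": "1",
    "title": "Jack Sparrow was a pirate. Many feared him. "
             "He used to live in downtown Las Vegas.",
    "description": "Jack Sparrow and Janette Sparrowa are now on their way "
                   "to Monte Carlo for the summer vacation.",
    "text": "The Apache OpenNLP project is developed by volunteers.",
}
body = json.dumps([doc])  # /update accepts a JSON array of documents

# POST body to:
#   https://<OPENSOLR_INDEX_HOST>/solr/<OPENSOLR_INDEX_NAME>/update?commit=true
# with Content-Type: application/json, then query the index and inspect
# people_s, location_s, organization_s, and language_s on the returned doc.
print(len(json.loads(body)))  # → 1
```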
See the Magic!
If any step trips you up, contact us and we’ll gladly assist you—whether it’s model enablement, schema help, or just a friendly chat about Solr and AI. 🤝
Happy Solr-ing & entity extracting!
If you’re uploading or saving configuration files using the Opensolr Editor, you might occasionally be greeted by an error that looks a little something like this:
Error loading class ‘solr.ICUCollationField’
Don’t worry—this doesn’t mean the sky is falling or that your config files have started speaking in tongues.
The error above simply means the ICU (International Components for Unicode) library isn’t enabled on your Opensolr server (yet!). This library is required if your configuration references classes like solr.ICUCollationField
—usually for advanced language collation and sorting.
The solution is delightfully simple: Contact Opensolr Support and request that we enable the ICU library for your server.
A real human (yes, a human!) will flip the right switches for your server, and you’ll be back to uploading config files in no time.
If you’re not sure what sort of error you’re running into—or just want to peek under the hood—you can always check your Error Logs after uploading config files, using the Error Logs button in your dashboard.
Check the logs to spot any ICU or other config errors. If it smells like ICU, contact us—if it smells like something else, well… contact us anyway. We’re here to help!
Happy indexing!