Documentation


🚦 Opensolr Traffic Bandwidth Limit: Explained

What’s the Deal with the Traffic Bandwidth Limit?

At Opensolr, we don’t count the number of requests you make—because, let’s face it, not all requests are created equal.
Instead, we use a Traffic Bandwidth Limit to keep things fair. You’re only billed (on a pre-paid plan) for the outgoing bytes sent from your Opensolr index to your site or app.

Translation:
- 1 GB of traffic could be a million ultra-efficient requests (if you optimize your queries)
- …or it could be just one monster request (if you don’t).
Yes, size matters!


Why Am I Seeing High Search Traffic Bandwidth?

  • Bots and web crawlers love to visit your site’s search pages—sometimes a little too much.
  • That traffic is then passed on to our servers, and can quickly add up.
  • If your bandwidth seems sky-high overnight, odds are you’re the (un)lucky recipient of a bot party… or maybe even an attack.

🛠️ Solution: Outsmart the Bytes!

Bonus: Opensolr transparently logs every single request. You get full access to see all the action via:

  • Our Automation API
  • The Opensolr Index Control Panel


📊 Real-World Examples

1. API - Logs & Analytics

  • Get, facet, and analyze your requests by any Solr-supported field.
  • Example: facet all results by IP and path—see who’s eating your bandwidth (sample query below).

Learn more in the API Docs
API Faceting Example
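For instance, a standard Solr facet query over your request logs might look like this (a sketch only: it assumes your logs are exposed as a Solr-queryable index, and the ip and path field names are hypothetical):

curl -u USERNAME:PASSWORD "https://<YOUR_OPENSOLR_INDEX_HOSTNAME>/solr/<YOUR_LOGS_INDEX_NAME>/select?q=*:*&rows=0&facet=true&facet.field=ip&facet.field=path&facet.limit=20"

With rows=0 you get no documents back—just the top 20 IPs and paths by request count: an instant view of who’s eating your bandwidth.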


2. Index Control Panel Analytics

  • See metrics on traffic spikes, popular queries, and more.
  • Diagnose what’s hot—and what’s not—on your search.

Read the Analytics Blog Post
Analytics Screenshot 1
Analytics Screenshot 2


3. Tail the Logs Like an Old-School Sysadmin

  • Use your Index Control Panel to view the last 1,000 lines of the live request log.
  • Spot traffic in real time. Block, optimize, or celebrate as needed.
  • Great for identifying bottlenecks, surprise traffic, or just showing off.

Tail Log Screenshot 1
Tail Log Screenshot 2


🥷 Pro Tips (Because You’re Not Just Any Solr User)

  • Bots aren’t going away—get friendly with your logs.
  • Optimize requests, use filters, and cut down on payloads.
  • Share your log horror stories. We’ve all been there.

Want deeper insights or custom advice? Contact our team. We love a good bandwidth optimization challenge!

Importing data from XML into Opensolr

If you were using Solr’s DataImportHandler, note that it was removed in Solr 9.x, so that approach is no longer possible.
Here's how to write a small script that will import data into your Opensolr Index, from XML files:

#!/bin/bash
# Replace everything inside the <> brackets with your own values
# (you can find them in your Opensolr Index Control Panel).
USERNAME="<OPENSOLR_INDEX_HTTP_AUTH_USERNAME>"
PASSWORD="<OPENSOLR_INDEX_HTTP_AUTH_PASSWORD>"

echo "Starting import on all indexes..."
echo ""

echo "Importing: <YOUR_OPENSOLR_INDEX_NAME>"
echo "Downloading the xml data file"
wget -q <URL_TO_YOUR_XML_FILE>/<YOUR_XML_FILE_NAME>
echo "Removing all data"
# Delete-by-query: the body must be a Solr XML delete command, not a bare query string.
curl -s -u $USERNAME:$PASSWORD "https://<YOUR_OPENSOLR_INDEX_HOSTNAME>/solr/<YOUR_OPENSOLR_INDEX_NAME>/update?commit=true&wt=json&indent=true" -H "Content-Type: text/xml" -d "<delete><query>*:*</query></delete>"
echo ""
echo "Uploading and Importing all data into <YOUR_OPENSOLR_INDEX_NAME>"
# --data-binary uploads the XML file; PIPESTATUS checks that curl (not tee) succeeded.
curl -u $USERNAME:$PASSWORD "https://<YOUR_OPENSOLR_INDEX_HOSTNAME>/solr/<YOUR_OPENSOLR_INDEX_NAME>/update?commit=true&wt=json&indent=true" --progress-bar -H "Content-Type: text/xml" --data-binary @<YOUR_XML_FILE_NAME> | tee -a "/dev/null" ; test ${PIPESTATUS[0]} -eq 0
echo ""
rm -f <YOUR_XML_FILE_NAME>
echo "Done!"
echo ""

The script is written so that, even with a minimal tech background, you can see that everything inside the <> brackets has to be replaced with your own values: your Opensolr Index Name, your Opensolr Index Hostname, the URL to your XML file, and so forth. You can find all of that in your Opensolr Index Control Panel—except the URL to your XML file, which is hosted somewhere on your end.

Your XML file must use the classic Solr update format.
This article should show you more about the Solr XML data file format.
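For reference, a minimal Solr XML data file looks like this (the field names are examples and must match the fields defined in your schema):

<add>
  <doc>
    <field name="id">1</field>
    <field name="title">My first document</field>
  </doc>
  <doc>
    <field name="id">2</field>
    <field name="title">My second document</field>
  </doc>
</add>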

🧠 Solr RAM & Memory Management: Best Practices (or, “How Not to Blow Up Your Server”)

Solr is a beast—it loves RAM like a dog loves a steak. If your Solr server is gobbling up memory and crashing, don’t panic! Here’s what you need to know, plus battle-tested ways to keep things lean, mean, and not out-of-memory.


Why Does Solr Use So Much RAM?

Solr eats memory to build search results, cache data, and keep things fast.
But:
- Bad configuration or huge, inefficient requests can cause even the biggest server to choke and burn through RAM.
- Sometimes, small indexes on giant machines will still crash if your setup isn’t right.
- Good news: Opensolr has self-healing—if Solr crashes, it’ll be back in under a minute. Still, prevention is better than panic.


🔧 Essential Best Practices

1. Save Transfer Bandwidth (and Memory)

Want to save bandwidth and RAM? Read these tips.
Optimizing your queries is a win-win: less data in and out, and less stress on your server.


2. Don’t Ask Solr to Return 10 Million Results

  • Requesting thousands of docs in one go?
    That makes Solr allocate all that data, and cache it, too.
  • Solution: Keep the rows parameter below 100 for most queries.
    Example:
    &rows=100

3. Paginate Responsibly (Or: Don’t Scroll to Infinity)

  • If you’re paginating over millions of docs (like &start=500000&rows=100), Solr has to allocate a ton of memory for all those results.
  • Solution: Try to keep start under 50,000 if possible—or switch to cursor-based paging (see the sketch after this list).
  • The more stored fields you have in your schema, the more RAM will be used for large paginations.
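For result sets too deep for start/rows, Solr’s cursorMark deep paging keeps memory flat no matter how far you go. A minimal sketch, assuming id is your uniqueKey field:

/select?q=*:*&rows=100&sort=id+asc&cursorMark=*

Each response includes a nextCursorMark value; pass it back as cursorMark on the next request instead of increasing start.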

4. Heavy Faceting, Sorting, Highlighting, or Grouping? Use docValues=true

  • Operations like faceting, sorting, highlighting, and grouping can be memory hogs.
  • Solution: Define those fields with docValues="true" in schema.xml. (Note: docValues are supported on string, numeric, and date field types—not on tokenized text types like text_general.)
  • Example:
    <field name="name" docValues="true" type="string" indexed="true" stored="true" />

  • For highlighting, you may want even more settings:
    <field name="description" type="text_general" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" storeOffsetsWithPositions="true" />


5. Don’t Go Cache-Crazy

Solr caches are great… until they eat all your memory and leave nothing for real work.

  • The big four:

    • filterCache: stores document ID lists for filter queries (fq)
    • queryResultCache: stores doc IDs for search results
    • documentCache: caches stored field values
    • fieldCache: stores all values for a field in memory (dangerous for big fields!)
  • Solution: Tune these in solrconfig.xml and keep sizes low.

  • Example:
    <filterCache size="1" initialSize="1" autowarmCount="0"/>

6. Using Drupal?

Keep your Search API Solr module up to date—older module versions carry bugs that can translate into heavy, memory-hungry queries (see the Drupal note in the JVM tuning article below).

🤓 Final Wisdom

  • RAM is precious. Don’t let Solr treat it like an all-you-can-eat buffet.
  • Optimize requests, paginate wisely, and keep configs tight.
  • If Solr OOMs (“Out of Memory”)—Opensolr’s got your back, but wouldn’t you rather avoid the drama?

Questions? Want a config review or more tips? Contact the Opensolr team!

🧠💥 Solr JVM Tuning RAM & Memory Management

Solr’s RAM appetite is legendary. Don’t worry, you’re not alone. Let’s help you keep your heap happy, your queries snappy, and your boss off your back.


🤔 Why Does Solr Use So Much Memory?

  • Search results: Returns tons of docs? RAM feast.
  • Caches: Four flavors, all with big appetites.
  • Big fields, bad configs, massive requests: Boom—there goes your heap.
  • Solr: “Give me RAM, and I shall give you… maybe some results.”

🛠️ Best Practices, in Style

1. Save Bandwidth, Save RAM

Fewer bytes → less RAM.
See our bandwidth tips.


2. Limit the rows Parameter!

Don’t return all the docs unless you want Solr to host a BBQ in your memory.

&rows=100

3. Paginate Responsibly

Huge start values = huge RAM usage.
Try not to cross start=50000 unless you really like chaos.


4. docValues or Bust

Faceting, sorting, grouping, highlighting:

<field name="my_field" docValues="true" type="string" indexed="true" stored="true"/>

5. Cache, but Not Like a Hoarder

Tighten up your caches in solrconfig.xml.

<filterCache size="1" initialSize="1" autowarmCount="0"/>

Monitor cache hit ratios; <10% = wasted RAM.


6. JVM Heap: Not a Dumpster, Not a Bathtub

  • Heap size:
    For most, 4g or 8g is enough.
    -Xms4g -Xmx4g
  • Garbage Collector:
    Use G1GC (modern, less “stop the world”).
    -XX:+UseG1GC
  • GC Tuning:
    For Solr 8+:
    -XX:+UseStringDeduplication -XX:MaxGCPauseMillis=200
  • Monitor:
    If your GC logs show frequent full GCs, it’s time to optimize.
    Enable GC logging for real insight:
    -Xlog:gc*:file=/var/solr/gc.log:time,uptime,level,tags:filecount=10,filesize=10M
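If you run your own Solr server, these flags typically live in bin/solr.in.sh. A minimal sketch—the values are examples to adapt, not prescriptions:

# bin/solr.in.sh
SOLR_HEAP="4g"   # sets both -Xms and -Xmx
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+UseStringDeduplication"
# Dump the heap on OOM so you can diagnose instead of guess
SOLR_OPTS="$SOLR_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/solr-heapdump.hprof"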

7. Watch the Heap & GC

  • In Solr Admin UI, watch for heap >85% or long GC pauses.
  • If your server pauses for coffee breaks, that’s bad news.

8. Index Analytics & Log Watching

  • Use the Opensolr Analytics panel to see who/what is eating RAM.
  • Tail your logs and spot traffic spikes—don’t wait for support to call you.

9. Drupal + Solr = PATCH NOW

Keep Search API Solr current or face the wrath of bugs.


🎯 TL;DR Pro Tips

  • Limit rows and start.
  • Use docValues for anything you facet, sort, or group.
  • Cache like you’re paying rent by the megabyte.
  • Tune JVM heap and GC for your workload, not someone else’s.
  • Watch logs, heap, and GC stats.
  • Patch integrations, always.

🧑‍🔬 JVM Tuning Quick Reference

| JVM Option | What It Does | Default/Example |
|---|---|---|
| -Xms / -Xmx | Min/max heap size | -Xms4g -Xmx4g |
| -XX:+UseG1GC | Use the G1 Garbage Collector | Always, for Java 8+ |
| -XX:MaxGCPauseMillis=200 | Target max GC pause time (ms) | -XX:MaxGCPauseMillis=200 |
| -XX:+UseStringDeduplication | Remove duplicate strings in heap | Java 8u20+ |
| -Xlog:gc* | GC logging | See above |
| -XX:+HeapDumpOnOutOfMemoryError | Write heap dump on OOM | Always! |
| -XX:HeapDumpPath=/tmp/solr-heapdump.hprof | Path for OOM heap dump | Set to a safe disk |

🤪 Meme Zone: Solr Memory Edition

Solr Heap Meme
“How many docs can I return? Solr: Yes.”


🤝 When to Call for Backup

  • Heap usage feels like the national debt
  • Solr restarts become your afternoon coffee break
  • JVM heap dumps are bigger than your backup drive

👉 Contact Opensolr Support — bring logs, configs, and memes. We love a challenge.

✨ Enable Spellcheck in Solr (Because Spelling Is Hard)

Enabling spellcheck in Apache Solr is like giving your users a helpful nudge whenever they make a typo—because we all know “seach” is not “search.”
Here’s how to get those “Did you mean…?” suggestions working for your queries!


📝 Step 1: Schema Configuration

  1. Edit your schema.xml (in your Solr core’s conf directory):
  2. Define a field type for spellcheck:
<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
       <tokenizer class="solr.StandardTokenizerFactory"/>
       <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
   <analyzer type="query">
       <tokenizer class="solr.StandardTokenizerFactory"/>
       <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
</fieldType>
  3. Define your content field and a spellcheck field:
<field name="content" type="textSpell" indexed="true" stored="true"/>
<field name="spell" type="textSpell" indexed="true" stored="false" multiValued="true"/>

⚙️ Step 2: Solr Configuration

  1. Edit your solrconfig.xml (in your Solr core’s conf directory).
  2. Find the <requestHandler> for /select and add the spellcheck component:
<requestHandler name="/select" class="solr.SearchHandler">
   <!-- ... -->
   <arr name="last-components">
       <str>spellcheck</str>
   </arr>
</requestHandler>

🔮 Step 3: Spellcheck Component Configuration

Still in solrconfig.xml, define your <searchComponent> for spellcheck:

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
   <lst name="spellchecker">
       <str name="name">default</str>
       <str name="field">spell</str>
       <str name="classname">solr.DirectSolrSpellChecker</str>
       <str name="distanceMeasure">internal</str>
       <float name="accuracy">0.5</float>
       <int name="maxEdits">2</int>
       <int name="minPrefix">1</int>
       <int name="maxInspections">5</int>
       <int name="minQueryLength">3</int>
       <float name="maxQueryFrequency">0.5</float>
   </lst>
</searchComponent>

Pro tip: You can tune these parameters based on your data and performance needs. For instance, more “maxEdits” means more generous suggestions, but potentially more noise!


♻️ Step 4: Reindex Your Data

After any schema/config changes, reindex your content.
Otherwise, your spellcheck dictionary will be lonely and unhelpful.


🧑‍💻 Step 5: Querying with Spellcheck

When making a search query, simply add the spellcheck parameter:

/select?q=your_query&spellcheck=true

You’ll get spellcheck suggestions in your Solr response, usually under the "spellcheck" section.
Voilà! No more missed searches due to typos. 🎉
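To also get a single, ready-to-run corrected query, turn on collation (spellcheck.collate is a standard Solr spellcheck parameter):

/select?q=your_query&spellcheck=true&spellcheck.collate=true

The response will then include a "collation" entry with the suggested rewrite of the whole query.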


💡 Bonus Tips

  • Spellcheck is a game-changer for user experience—but don’t overdo it. Too many suggestions can be distracting.
  • Always test with real-world typo examples (we all have a user who types “bannana” instead of “banana”).
  • Tweak your spellchecker for speed vs. accuracy—there’s always a balance.

Now your Solr is smart enough to fix “teh” into “the.” Happy searching! 🪄

🧠 Using NLP Models in Your Solr schema_extra_types.xml

Leverage the power of Natural Language Processing (NLP) right inside Solr!
With built-in support for OpenNLP models, you can add advanced tokenization, part-of-speech tagging, named entity recognition, and much more—no PhD required.


🚀 Why Use NLP Models in Solr?

Integrating NLP in your schema allows you to:

  • Extract nouns, verbs, or any part-of-speech you fancy.
  • Perform more relevant searches by filtering, stemming, and synonymizing.
  • Create blazing-fast autocomplete and suggestion features via EdgeNGrams.
  • Support multi-language, linguistically smart queries.

In short: your Solr becomes smarter and your users get better search results.


⚙️ Example: Dutch Edge NGram Nouns Field

Here’s a typical fieldType in your schema_extra_types.xml using OpenNLP:

<fieldType name="text_edge_nouns_nl" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.OpenNLPTokenizerFactory" sentenceModel="/opt/nlp/nl-sent.bin" tokenizerModel="/opt/nlp/nl-token.bin"/>
    <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="/opt/nlp/nl-pos-maxent.bin"/>
    <filter class="solr.TypeTokenFilterFactory" types="pos_edge_nouns_nl.txt" useWhitelist="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.OpenNLPTokenizerFactory" sentenceModel="/opt/nlp/nl-sent.bin" tokenizerModel="/opt/nlp/nl-token.bin"/>
    <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="/opt/nlp/nl-pos-maxent.bin"/>
    <filter class="solr.TypeTokenFilterFactory" types="pos_edge_nouns_nl.txt" useWhitelist="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms_edge_nouns_nl.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
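To actually use this type, declare a field of it and copy your source content in (the title and title_edge_nouns_nl field names here are hypothetical):

<field name="title_edge_nouns_nl" type="text_edge_nouns_nl" indexed="true" stored="false"/>
<copyField source="title" dest="title_edge_nouns_nl"/>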

🔎 Important Details

  • Model Paths:
    Always reference the full absolute path for NLP model files. For example:
    sentenceModel="/opt/nlp/nl-sent.bin"
    tokenizerModel="/opt/nlp/nl-token.bin"
    posTaggerModel="/opt/nlp/nl-pos-maxent.bin"
    This ensures Solr always finds your precious language models—no “file not found” drama!

  • Type Token Filtering:
    The TypeTokenFilterFactory with useWhitelist="true" will only keep tokens matching the allowed parts of speech (like nouns, verbs, etc.), as defined in pos_edge_nouns_nl.txt. This keeps your index tight and focused.

  • Synonym Graphs:
    Add SynonymGraphFilterFactory to enable query-side expansion. This is great for handling multiple word forms, synonyms, and local lingo.


🧑‍🔬 Best Practices & Gotchas

  • Keep your NLP model files up to date and tested for your language version!
  • If using multiple languages, make sure you have the right models for each language. (No, Dutch models won’t help with Klingon. Yet.)
  • EdgeNGram and NGram fields are fantastic for autocomplete—but don’t overdo it, as they can bloat your index if not tuned.
  • Use RemoveDuplicatesTokenFilterFactory to keep things clean and efficient.

🌍 Not Just for Dutch!

You can set up similar analyzers for English, undefined language, or anything you like. For example:

<fieldType name="text_nouns_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.OpenNLPTokenizerFactory" sentenceModel="/opt/nlp/en-sent.bin" tokenizerModel="/opt/nlp/en-token.bin"/>
    <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="/opt/nlp/en-pos-maxent.bin"/>
    <filter class="solr.TypeTokenFilterFactory" types="pos_nouns_en.txt" useWhitelist="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.OpenNLPTokenizerFactory" sentenceModel="/opt/nlp/en-sent.bin" tokenizerModel="/opt/nlp/en-token.bin"/>
    <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="/opt/nlp/en-pos-maxent.bin"/>
    <filter class="solr.TypeTokenFilterFactory" types="pos_nouns_en.txt" useWhitelist="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms_nouns_en.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>

📦 Keep It Organized

  • Store all model files in a single, logical directory (like /opt/nlp/), and keep a README so you know what’s what.
  • Protect those models! They’re your “brains” for language tasks.

🛠️ Wrap-up

Using NLP models in your Solr analyzers will supercharge your search, make autocomplete smarter, and help users find what they’re actually looking for (even if they type like my cat walks on a keyboard).

Need more examples?
Check out the Solr Reference Guide - OpenNLP Integration or Opensolr documentation.


Happy indexing, and may your tokens always be well-typed! 😸🤓

📦 How to Upload Solr Configuration Files (Like a Pro!)

Solr thrives on configuration files—each with its own special job.
Whether you’re running a classic Solr install, a CMS like Drupal, or even going rogue with WordPress and WPSOLR, proper configuration is key.


🤓 Why Does the Order Matter?

Solr configurations often reference each other (think: dependencies). If you upload them in the wrong order, you’ll get errors, failed indexes, and possibly even a mild existential crisis.


🚦 The “Three Archive” Method (aka Solr Zen)

When uploading your Solr config files via the Opensolr Index Control Panel, follow this foolproof order:

  1. Dependencies First!
    Create and upload a .zip containing all dependency files (such as .txt files, schema-extra.xml, solrconfig-extra.xml, synonyms, stopwords, etc).
    Basically, everything except the main schema.xml and solrconfig.xml.

  2. Schema Second!
    Zip and upload just your schema.xml file.
    This file defines all fields and refers to resources from the previous archive.

  3. solrconfig Last!
    Finally, zip and upload your solrconfig.xml file.
    This references your schema fields and ties all the magic together.

In summary:
1️⃣ Dependencies → 2️⃣ schema.xml → 3️⃣ solrconfig.xml
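Here’s a quick sketch of preparing the three archives locally (the dependency file names are examples—zip whatever your configs actually reference):

zip 1_dependencies.zip stopwords.txt synonyms.txt schema-extra.xml solrconfig-extra.xml
zip 2_schema.zip schema.xml
zip 3_solrconfig.zip solrconfig.xml

Then upload the three archives via the Index Control Panel, in that exact order.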


⚡️ Can I Automate This?

Absolutely!
Use the Opensolr Automation REST API to upload your config files programmatically.
Because, let’s face it, real wizards script things.


📝 Pro Tips

  • Always double-check references between config files!
  • If you’re using a CMS, look for community best practices on managing Solr configs.
  • Feeling unsure? Upload one at a time, in the order above, and test after each.

Now go forth and upload with confidence! 🦾

🧩 Using the AutoPhrase TokenFilter JAR in Opensolr

The AutoPhrase TokenFilter is a powerful Solr plugin that helps you recognize and index multi-word expressions as single tokens (think: “New York City” as one unit, not three). This can significantly improve the quality of search, autocomplete, and analytics.


⚡️ Is It Enabled by Default?

Not on all Opensolr environments!
If you’re trying to use the AutoPhraseTokenFilterFactory and see errors like:

Plugin not found: solr.AutoPhraseTokenFilterFactory

…then the jar isn’t active on your server (yet).


🛠️ What To Do?

  1. Contact Us
    Simply send us a request and we’ll install the AutoPhrase library (or pretty much any other custom Solr plugin) for you.

  2. How to Request a Plugin

    • Follow the step-by-step guide: How do I add a lib/jar file?
    • Let us know which version of Solr you’re using (the right jar version matters!).
    • Optionally, send the JAR file directly if it’s a custom or non-public library.

  3. After Installation

    • Add the appropriate <filter class="solr.AutoPhraseTokenFilterFactory" ... /> element to your field type in schema.xml (see the sketch below).
    • Reload your core to activate the new filter.
    • Don’t forget to update your schema or config if needed—AutoPhrase sometimes requires its own config files or phrase lists.
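As a rough sketch only—the filter’s attributes vary between plugin builds, so treat the phrases="autophrases.txt" attribute below as an assumption and check your plugin’s documentation:

<fieldType name="text_autophrase" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- "phrases" points to a one-phrase-per-line list; the attribute name is an assumption -->
    <filter class="solr.AutoPhraseTokenFilterFactory" phrases="autophrases.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>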

🚨 Gotchas & Tips

  • Version Compatibility: Always use a plugin version that matches your Solr version.
  • Security: Opensolr reviews all uploaded JARs for security reasons—public/official plugins are easier/faster to approve!
  • Performance: Heavy custom token filters (like AutoPhrase) can impact indexing speed. Test with your real data!

🔍 Learn More


Questions? Contact Opensolr Support — we’re happy to help!

(If you’re a plugin power user, give us a heads up and we’ll have your Solr instance doing backflips in no time. 🕺)

If you keep getting redirected to the Login page, or you have trouble placing a new order after trying to log in multiple times, please clear your opensolr.com cookies or use a different browser.

   

🏗️ Using Custom JAR Libraries in Opensolr

Need a special Solr plugin or custom filter?
No problem! Opensolr supports custom JAR libraries—so you can fine-tune your search platform with advanced features.


🚚 How to Install a Custom JAR Library

  1. Send Us Your JAR
    Email your custom JAR file (or a link to the official plugin page where binaries are already compiled) to support@opensolr.com.

  2. Include This Info

    • Your Opensolr Registration Email Address
    • The Opensolr Index Name (where you want the plugin installed)

  3. Installation Timeline

    • Most installations are done within a couple of hours (we say “up to 24 hours” to cover rare edge cases and to sound like responsible adults).
    • If the plugin is fully compatible with your Solr version, it’s usually lightning fast!

🛡️ Pro Tips for Success

  • Send the JAR File Itself
    Don’t just send the source code. We need the compiled .jar binary!
  • Official Sources Are Best
    For security and speed, send links to official or reputable plugin pages.
  • Version Match Matters
    Double-check that your JAR matches your Solr version—otherwise it might throw errors (or, even worse, not work at all).

🔄 After Installation

Once we’ve installed the plugin:

  • Update your schema.xml or solrconfig.xml to use your new library (we can help with this if needed).
  • Reload your Solr core to activate the changes.
  • Test your configuration—give it a spin!


Questions? Stuck?
Email support@opensolr.com and our tech team will leap into action (well, at least open their laptops and get right on it).


With Opensolr, you’re never stuck with just the basics. Power up your index—your way! ⚡️

Click on the Tools menu item on the right-hand side, then simply use the form to create your query and delete data.
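If you prefer the command line, the same thing can be done with a standard Solr delete-by-query request (hostname, index name, and the query itself are placeholders to replace with your own):

curl -u USERNAME:PASSWORD "https://<YOUR_OPENSOLR_INDEX_HOSTNAME>/solr/<YOUR_OPENSOLR_INDEX_NAME>/update?commit=true" -H "Content-Type: text/xml" -d "<delete><query>field:value</query></delete>"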

To move from using the managed-schema to schema.xml, simply follow the steps below:

In your solrconfig.xml file, look for the schemaFactory definition. If you have one, remove it and add this instead:

<schemaFactory class="ClassicIndexSchemaFactory"/>

If you don't have one, just add the above snippet somewhere above the requestHandler definitions. Also make sure your config actually contains a schema.xml file—if you only have a managed-schema, rename it to schema.xml first.

 

To move from the classic schema.xml to the managed-schema in your Opensolr index, simply follow the steps below:

In your solrconfig.xml, look for a SchemaFactory definition, and replace it with this snippet:

   <schemaFactory class="ManagedIndexSchemaFactory">
      <bool name="mutable">true</bool>
      <str name="managedSchemaResourceName">managed-schema</str>
   </schemaFactory>

If you don't have any schemaFactory definition, just paste the above snippet into your solrconfig.xml file, just above any requestHandler definition. (On first load, Solr will convert your existing schema.xml into a managed-schema file.)

📦 Solr Version Freedom at Opensolr

Opensolr now supports any Solr version your project could dream of! 🎉

Solr Versions provided by Opensolr.com


🦸 Why Is This a Big Deal?

  • Legacy Project? Running Solr 4.10 from the good ol’ days? No problem.
  • Latest & Greatest? Need Solr 9.x with all the cutting-edge bells and whistles? Covered!
  • Migration? Want to test upgrades on a staging index before you go live? We make it easy.

🚀 Version Highlights

  • Zero Lock-in: Move between Solr versions as your business evolves—no need to migrate off Opensolr, ever!
  • Multiple Versions, Side-by-Side: Test and deploy multiple versions at the same time, all under one account.
  • Expert Support: Unsure which version to pick? Our Solr ninjas are here to advise (and talk you out of running Solr 1.4 in production…).

🏗️ Use Cases

  • Dev/Test Environments: Try out new features on Solr 9.x while your production index cruises safely on 8.x.
  • Backward Compatibility: Keep that legacy integration happy with the version it needs, while you plan upgrades.
  • Smooth Upgrades: Clone your index, test migration paths, and upgrade with zero downtime or heartburn.

💡 Pro Tips

  • Always check compatibility with your apps, connectors, and plugins before switching versions.
  • You can request custom Solr builds or even bleeding-edge snapshots for adventurous projects.
  • Mix and match versions across your different indexes, for ultimate flexibility.

With Opensolr, your project’s Solr version is never a limitation—it’s a superpower! 🦾

Contact Opensolr Support to spin up any version, or just ask us which one makes sense for your needs!

Please go to https://opensolr.com/pricing and make sure you select the analytics option from the extra features tab, when you upgrade your account. 

If you can see analytics but no data, make sure your Solr queries are correctly formatted in the form:
https://server.opensolr.com/solr/index_name/select?q=your_query&other_params...

So, the search query must be clearly visible in the q parameter in order for it to show in analytics. 

💾 How to Save Your Monthly Bandwidth Like a Pro

Bandwidth: you don’t notice it… until you run out. Here’s how to keep your Opensolr search snappy without burning through your monthly gigabytes.


🧠 Smart Bandwidth Hacks

  1. 🗃️ Use Local Caching (e.g., Memcache or Redis)

    • Cache your search results locally so you don’t have to hit Solr for every page reload, autocomplete, or back-button click (see the sketch after this list).
    • Result: fewer requests, happier users, and much lower bandwidth usage!

  2. 🔄 Solr Replication Magic

    • Bandwidth is per-index. So, if you want to double (or triple) your available bandwidth, set up Solr Replication.
    • Create Index A, replicate to Index B, then have your application perform round-robin queries across both. Voila! Twice the bandwidth pool for your heavy-hitting queries.
    • (Think of it as the “BOGO” deal for bandwidth.)

  3. 🎯 Return Only What You Need

    • Tweak your /select requests using the rows and fl parameters to only fetch the records and fields you truly need.
    • Example:
      /select?q=mysearch&rows=10&fl=id,title
      Don’t pull the whole database just because you can—every extra byte eats into your bandwidth.
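Here’s a minimal local-caching sketch in shell, assuming a Redis server on localhost (in practice your app would do this in its own language, but the logic is the same):

# Cache Solr responses in Redis for 5 minutes, keyed by a hash of the query string
QUERY="q=laptop&rows=10&fl=id,title"
KEY="solr:$(echo -n "$QUERY" | md5sum | cut -d' ' -f1)"
RESPONSE=$(redis-cli GET "$KEY")
if [ -z "$RESPONSE" ]; then
  # Cache miss: hit Solr once, then store the response with a 300-second TTL
  RESPONSE=$(curl -s -u USERNAME:PASSWORD "https://<YOUR_OPENSOLR_INDEX_HOSTNAME>/solr/<YOUR_OPENSOLR_INDEX_NAME>/select?$QUERY")
  redis-cli SET "$KEY" "$RESPONSE" EX 300 > /dev/null
fi
echo "$RESPONSE"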

💡 Extra Tips

  • Audit Your Queries: Are you returning 100,000 records and only displaying 10? That’s a lot of wasted bandwidth.
  • Compress Responses: If your app supports it, enable gzip compression for Solr responses.
  • Monitor Usage: Use Opensolr’s control panel or logs to see which queries are your biggest bandwidth hogs—and optimize accordingly.

Master these tricks and your bandwidth will go further, your bills will shrink, and your search users will never know you’ve become a traffic ninja. 🥷

Need more ideas? Contact the Opensolr team!

🏋️‍♂️ Trading Performance for Index Size: The Art of a Leaner Solr

Sometimes, you’ve got to make a trade: a bit less speed for a lot less disk space.
Here’s how you can shrink your Solr index like a pro (and keep your server from bursting at the seams):


⚡ Field Types: Small Tweaks, Big Savings

  • Go for int instead of tint
    Using an int field takes up less space than a trie integer (tint).
    But beware!
    Range queries on int will be slower than tint. (It’s a classic “pick two out of three” scenario: fast, small, cheap.)

🔬 Time for a Field Audit!

  • Take a hard look at your fields.
    Sometimes, to get a slimmer index, you need to be ruthless.

  • Are you hoarding stored fields?
    If you’ve got lots of stored fields, try this power move:

    • Remove stored fields from Solr.
    • Query your main database for details after Solr gives you the results.
    • Your index (and your disk) will thank you!

🧠 Schema Jedi Tricks

  • Add omitNorms="true"
    On text fields that don’t need length normalization.
    (Translation: If you don’t care about short/long document bias, ditch the norms and reclaim space—see the example after this list!)

  • Add omitPositions="true"
    On text fields that don’t require phrase matching.
    (You lose phrase search on those fields, but win back precious bytes.)

  • Beware the NGram monster!
    Special fields like NGrams can gobble up a ton of space.

  • Use them only where necessary.
  • Regular text fields will do the trick in most cases.
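For example, a slimmed-down text field might look like this (the field name is hypothetical; keep norms and positions on any field where you do need length-based scoring or phrase queries):

<field name="body" type="text_general" indexed="true" stored="false" omitNorms="true" omitPositions="true"/>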

🛑 Stop Words: Use ’Em!

  • Are you removing stop words from text fields?
    Common words like “the,” “and,” and “of” just take up space and slow down searching.
  • Remove them at index time (example below) and keep your index mean and lean!
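A typical stop-word setup inside a field type’s analyzer (stopwords.txt is the conventional file name in Solr config sets):

<filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>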

✨ Final Pro Tips

  • Every byte counts in large-scale Solr.
  • Think before you index: Will I ever search/filter on this field?
  • Regular “field cleanups” can save money and headaches.

Shrink smart, and may your search be speedy and your indexes svelte! 🚀🗜️

🚀 OpenSolr: Cloud-Powered Search With a Human Touch

OpenSolr is more than just a place to host your Apache Solr instance—it’s your full-service, hands-off search infrastructure butler, working around the clock so you don’t have to! Here’s what makes OpenSolr the trusted choice for devs and businesses worldwide:


🌐 What Is OpenSolr?

OpenSolr is a cloud-based search service that takes all the hassle out of hosting, scaling, and managing Apache Solr, the legendary open-source search platform known for:

  • 🔍 Blazing fast full-text search
  • 💡 Hit highlighting
  • 🧩 Faceted navigation
  • 🪄 Dynamic clustering
  • 📄 Rich document support

🏆 Why Choose OpenSolr? Key Benefits

  1. 🛠️ Managed Solr Hosting
    Let OpenSolr handle the dirty work—setup, upgrades, security patches, scaling—so you can focus on what matters: building awesome stuff.

  2. 📈 Scalability & Performance
    Need to handle millions of searches? No sweat. OpenSolr lets you ramp up or down in seconds, delivering reliable performance at any scale.

  3. 🔒 Data Security & Backups
    Rest easy with industry-standard SSL encryption, regular data backups, and built-in recovery tools. Your data’s safe, come rain or ransomware.

  4. ⚙️ Customizable Search Indexes
    Define your own schemas, play with analyzers, import data your way. It’s Solr, but without the migraine.

  5. 🖥️ User-Friendly Control Panel
    Forget the CLI—manage, monitor, and tweak your search environment in a slick web interface. Analytics, logs, real-time stats—one click away.

  6. 🙋 Rockstar Support & Consulting
    Stuck? OpenSolr’s experts are on standby, offering guidance, troubleshooting, and performance tips. (We don’t judge your config typos.)

  7. 🔌 Easy Integration & APIs
    Plug OpenSolr into your e-commerce platform, CMS, data warehouse, or even your secret AI project. REST APIs and connectors included!

  8. 🌍 Global Data Centers
    Your users are everywhere—so is OpenSolr. Pick the region closest to you for lightning-fast, reliable service worldwide.


💡 Who Uses OpenSolr?

Anyone who wants powerful, scalable, professional search without the burden of self-hosting:

  • E-commerce stores 🛒
  • Content management systems 📝
  • News & media websites 🗞️
  • SaaS products ☁️
  • Any data-hungry app that needs to search like a boss!


OpenSolr: Where world-class search meets old-school reliability (and a dash of wit).

Ready to search smarter?
Sign up for a free trial!

EZcmd.com is a useful set of GeoData and GeoIP utilities.

Here are a few screenshots

GeoIP Tools screenshots





