{"id":6691,"date":"2013-01-10T15:29:27","date_gmt":"2013-01-10T14:29:27","guid":{"rendered":"http:\/\/blog.trifork.nl\/?p=6691"},"modified":"2013-01-10T15:29:27","modified_gmt":"2013-01-10T14:29:27","slug":"how-to-write-an-elasticsearch-river-plugin","status":"publish","type":"post","link":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/","title":{"rendered":"How to write an elasticsearch river plugin"},"content":{"rendered":"<p><img decoding=\"async\" style=\"float: right;width: 35%;height: 35%\" alt=\"\" src=\"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png\" \/>Up until now I told you why I think <a href=\"http:\/\/elasticsearch.org\" target=\"_blank\" rel=\"noopener\">elasticsearch<\/a> is <a href=\"http:\/\/blog.trifork.nl\/2012\/09\/25\/whats-so-cool-about-elasticsearch\/\">so cool<\/a> and how you can use it <a href=\"http:\/\/blog.trifork.nl\/2012\/09\/13\/elasticsearch-beyond-big-data-running-elasticsearch-embedded\/\">combined with Spring<\/a>. It\u2019s now time to get to something a little more technical. For example, once you have a search engine running you need to index data; when it comes to indexing data you usually need to choose between the push and the pull approach. This blog entry details these approaches and goes into writing a river plugin for elasticsearch.<\/p>\n<p><!--more-->Implementing the push approach means writing your own indexer using your favourite programming language and pushing data to the search engine through some client library or even sending REST requests to it.<\/p>\n<p>On the other hand, implementing the pull approach with elasticsearch means writing a special type of plugin, also known as <a href=\"http:\/\/www.elasticsearch.org\/guide\/reference\/river\/\">river<\/a>, which will pull data from a data source and index it. 
The data source can be whatever system you can get data from: the file system, a database, and so on.<\/p>\n<p>While the push approach is the most flexible one, the river is a nice and standard way to distribute an indexer as a plugin. You then run it (at your own risk) in elasticsearch itself, without the need to start up a separate application\/process. And if you are a Java developer, it makes even less difference since you are probably going to use the <a href=\"http:\/\/www.elasticsearch.org\/guide\/reference\/java-api\/\" target=\"_blank\" rel=\"noopener\">elasticsearch Java API<\/a> to index data and interact with it, whether you are writing a separate indexer or a river. In the end the code is going to be pretty much the same except for the bootstrap part.<\/p>\n<h3>But you may still wonder, what\u2019s better? Push or pull? My answer is: it depends!<\/h3>\n<p>If you don\u2019t want to use Java, or if you want to have complete control over the indexing process and write a specific piece of software for your own needs, then the push approach seems to be the way to go.<br \/>\nBut if you\u2019re a Java guy and would like to write a generic indexer which other users can benefit from, or if you just don\u2019t want to bother maintaining a separate application for it, the river is a good choice. There are already quite a few <a href=\"http:\/\/www.elasticsearch.org\/guide\/reference\/modules\/plugins.html\">elasticsearch rivers available<\/a>. 
For example, you can give the <a href=\"https:\/\/github.com\/jprante\/elasticsearch-river-jdbc\">JDBC river<\/a> a try if you want to index data taken from a database using JDBC, or you can use the <a href=\"https:\/\/github.com\/elasticsearch\/elasticsearch-river-twitter\">Twitter river<\/a> to import data from Twitter.<\/p>\n<h3>Getting started<\/h3>\n<p>I recently wrote the <a href=\"https:\/\/github.com\/javanna\/elasticsearch-river-solr\/\">Solr river<\/a>, which I\u2019m going to use as an example for this blogpost in order to show you the steps needed to write a river. The Solr river allows you to easily import data from a running Solr instance. Although the <a href=\"http:\/\/www.elasticsearch.org\/blog\/2010\/09\/28\/the_river.html\">initial concept behind elasticsearch rivers<\/a> was to handle a constant stream of data, like for example the Twitter river does using the Twitter streaming API, the Solr river is not meant to keep Solr in sync with elasticsearch (why would you do that?), but only to import data once and start working with elasticsearch in no time.<\/p>\n<p>Elasticsearch uses <a href=\"http:\/\/code.google.com\/p\/google-guice\/\">Google Guice<\/a> as a dependency injection framework. You don\u2019t need to know a lot about it to write a plugin, but I\u2019d suggest having a look at its <a href=\"http:\/\/code.google.com\/p\/google-guice\/wiki\/GettingStarted\">Getting Started page<\/a> if you haven\u2019t heard about it. Also remember that Guice is one of those dependencies that are shared with elasticsearch itself. The version used at the time of writing is 2.0.<\/p>\n<p>Since we are writing a plugin, the first step is to write a class that implements the Plugin interface. That\u2019s even easier if we extend the AbstractPlugin class, which handles some boilerplate code for us. 
We need to implement the name and description methods in order to provide a name and a description for the plugin.<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\n    @Override\n    public String name() {\n        return &quot;river-solr&quot;;\n    }\n\n    @Override\n    public String description() {\n        return &quot;River Solr plugin&quot;;\n    }\n<\/pre>\n<p>After that we need to register our plugin\u2019s components. What do I mean by this? Every plugin adds some features to the system, and those features need to be registered so that the system knows about them. You can do it through Guice. While loading all plugins, elasticsearch invokes (via reflection) a method called <code>onModule<\/code> with a parameter that extends the <code>AbstractModule<\/code> class. The <code>AbstractModule<\/code> class is actually the base class for any Guice module, which contains a collection of bindings. We are writing a river, therefore we need to write the following method that receives the <code>RiversModule<\/code> as input, where we can register our river:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\npublic void onModule(RiversModule module) {\n    module.registerRiver(&quot;solr&quot;, SolrRiverModule.class);\n}\n<\/pre>\n<p>But the <code>PluginsService<\/code> gives us as input the module that we need for our plugin, which isn\u2019t always the <code>RiversModule<\/code>. 
For example, if we wanted to write a new analyzer, the <code>onModule<\/code> method would look like this:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\npublic void onModule(AnalysisModule module) {\n    module.addAnalyzer(&quot;new-analyzer&quot;, NewAnalyzerProvider.class);\n}\n<\/pre>\n<p>while if we wanted to write a plugin that adds a new scripting engine, the <code>onModule<\/code> method would look like the following:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\npublic void onModule(ScriptModule module) {\n    module.addScriptEngine(NewScriptEngineService.class);\n}\n<\/pre>\n<p>Finally, if we wanted to write a plugin that adds a new REST endpoint to elasticsearch, here is the needed <code>onModule<\/code> method:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\npublic void onModule(RestModule module) {\n    module.addRestAction(NewRestAction.class);\n}\n<\/pre>\n<p>Back to our Solr river, what is the <code>SolrRiverModule<\/code> that we registered to the <code>RiversModule<\/code>? That\u2019s our specific Guice module. It contains all the bindings required for our plugin. For the Solr river (and most of the plugins) we just need to load the <code>SolrRiver<\/code> class, which contains, as we\u2019ll see in a moment, the code for our river and implements the <code>River<\/code> interface.<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\npublic class SolrRiverModule extends AbstractModule {\n    @Override\n    protected void configure() {\n        bind(River.class).to(SolrRiver.class).asEagerSingleton();\n    }\n}\n<\/pre>\n<p>The bind call above just tells Guice that for this module the <code>River<\/code> implementation in use will be <code>SolrRiver<\/code>. We are also asking Guice to eagerly initialize our class. This means that the <code>SolrRiver<\/code> instance will be created during Guice initialization and not only when needed. 
As a result, we\u2019ll know immediately if there are problems while creating the object.<\/p>\n<p>We are ready to have a look at the river code, but first I need to show you something else. How does elasticsearch know to load the initial <code>SolrRiverPlugin<\/code> class? Some kind of component scanning would be nice here, but that\u2019s available neither in Guice nor in elasticsearch. What we need to do is put a file called <b>es-plugin.properties<\/b> on the classpath. It needs to contain the following line, which tells elasticsearch what class to load in order to start the plugin:<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nplugin=org.elasticsearch.plugin.river.solr.SolrRiverPlugin\n<\/pre>\n<p>A river implements the <code>River<\/code> interface and usually extends the <code>AbstractRiverComponent<\/code> too. The <code>River<\/code> interface contains only two methods: <code>start<\/code>, called when the river is started (which can be either when you register the river or when the node on which the river is allocated is started), and <code>close<\/code>, called when the river is closed. The <code>AbstractRiverComponent<\/code> is just a helper that initializes the logger for the river and stores the river name and the river settings in two instance members.<\/p>\n<p>The river constructor is annotated with the Guice <code>@Inject<\/code> annotation, so that the object will be injected by Guice with all the needed dependencies. 
The current <code>SolrRiver<\/code> only depends on <code>RiverName<\/code>, <code>RiverSettings<\/code> and <code>Client<\/code>, where <code>Client<\/code> is a client pointing to the node where the river is allocated.<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\n@Inject\nprotected SolrRiver(RiverName riverName,\n                    RiverSettings riverSettings,\n                    Client client)\n<\/pre>\n<p>The nice thing here is that if you have other dependencies on objects that are already available in elasticsearch and bound through Guice, you can just add them as constructor parameters. For example, to add scripting capabilities to the river (which by the way might be the next feature I\u2019m going to work on) we could simply add a parameter of type <code>ScriptService<\/code>.<\/p>\n<p>What we basically do in the river constructor is read the settings provided when registering it, in order to control the river\u2019s behaviour. The <code>start<\/code> method contains what the river really does: it sends a query to a running Solr instance through <a href=\"https:\/\/wiki.apache.org\/solr\/Solrj\">SolrJ<\/a> and indexes the results in elasticsearch. It uses pagination (with a configurable page size) while querying, in order to avoid retrieving too many documents at the same time.<\/p>\n<p>In order to use a river you need to do something really similar to what you do when you index a document:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncurl -XPUT localhost:9200\/_river\/solr_river\/_meta -d '{\n    &quot;type&quot; : &quot;solr&quot;\n}'\n<\/pre>\n<p>The JSON above is the very minimum configuration needed to register a river, providing its type. 
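As an aside, the paginated import loop mentioned above can be sketched in plain Java. This is a simplified illustration under assumed names (`fetchPage` and `importAll` are hypothetical helpers); the real river pages through results with SolrJ queries and indexes each page with the elasticsearch client instead of collecting it:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of a paginated import loop: fetch one page at a time
// until the source is exhausted, so we never hold all documents in memory.
public class PaginatedImport {

    // Stand-in for querying the data source with a start offset and page size.
    static List<String> fetchPage(List<String> source, int start, int rows) {
        if (start >= source.size()) {
            return List.of();
        }
        return source.subList(start, Math.min(start + rows, source.size()));
    }

    // Each returned batch would be sent to elasticsearch rather than collected.
    public static List<List<String>> importAll(List<String> source, int pageSize) {
        List<List<String>> batches = new ArrayList<>();
        int start = 0;
        while (true) {
            List<String> page = fetchPage(source, start, pageSize);
            if (page.isEmpty()) {
                break; // no more documents to import
            }
            batches.add(new ArrayList<>(page));
            start += pageSize;
        }
        return batches;
    }
}
```

The page size here plays the role of the river's configurable page size: a larger page means fewer round trips to Solr, but more memory used per batch.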
The type must match the string previously provided when registering the <code>SolrRiverModule<\/code> to the <code>RiversModule<\/code>:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\nmodule.registerRiver(&quot;solr&quot;, SolrRiverModule.class);\n<\/pre>\n<p>And&#8230;yes, you are in fact indexing a document, but into a special index called <code>_river<\/code>, under the type <code>solr_river<\/code>. Your document id is <code>_meta<\/code>. If you provide additional configuration while registering the river, like this:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncurl -XPUT localhost:9200\/_river\/solr_river\/_meta -d '{\n    &quot;type&quot; : &quot;solr&quot;,\n    &quot;solr&quot; : {\n        &quot;url&quot; : &quot;http:\/\/localhost:8080\/solr\/&quot;,\n        &quot;q&quot; : &quot;*:*&quot;\n    }\n}'\n<\/pre>\n<p>that extra information is stored in the <code>_meta<\/code> document as well. You can even consider keeping some kind of river state within this special index if you need to. It gets initialized with one shard and one replica.<\/p>\n<p>After you have registered a river on a node, it will start again every time you start the node, possibly trying to import your data again. If that\u2019s not what you want, for example because the river is meant to import data only once, like the Solr river, you can just automatically close it after the data import, like this:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\nclient.admin().indices().prepareDeleteMapping(&quot;_river&quot;).setType(riverName.name()).execute();\n<\/pre>\n<p>Also, it\u2019s good to know that the river is a singleton in the cluster. It\u2019s allocated on a single node, which can change in case of failure. 
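To give an idea of how such extra configuration can be consumed, here is a hedged sketch of reading a nested setting with a default. The real <code>SolrRiver<\/code> gets its values from <code>RiverSettings<\/code>; the plain map below merely mimics the JSON body of the <code>_meta<\/code> document above, and <code>readSetting<\/code> is a hypothetical helper, not an elasticsearch API:

```java
import java.util.Map;

// Hypothetical helper sketching how a river can read a nested setting,
// falling back to a default when the section or key is missing.
public class RiverSettingsReader {

    @SuppressWarnings("unchecked")
    public static String readSetting(Map<String, Object> settings,
                                     String section, String key,
                                     String defaultValue) {
        Object nested = settings.get(section);
        if (nested instanceof Map) {
            Object value = ((Map<String, Object>) nested).get(key);
            if (value != null) {
                return value.toString();
            }
        }
        // Section or key absent: use the default.
        return defaultValue;
    }
}
```

Defaulting every setting this way is what lets the minimal one-line registration (just the type) still produce a working river.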
You can also control through configuration the allocation of rivers over the cluster.<\/p>\n<h3>Use the Bulk API<\/h3>\n<p>A little hint that can save you time: when you index data, you might want to have a look at the <a href=\"http:\/\/www.elasticsearch.org\/guide\/reference\/java-api\/bulk.html\">Bulk API<\/a> to index multiple documents with a single request and make the indexing process faster. But then you need to control how often you want to send the bulk request. Every 100 documents? Every 5 minutes? Both? It all depends on your data source&#8230;<\/p>\n<p>And what is the maximum number of concurrent bulks you want to allow? It can happen that your data source is faster than your bulks, and you might not want to keep adding concurrent bulks over and over. It\u2019s probably best to wait a little, until some of the bulks have completed, before running new ones. Since this logic is needed pretty much everywhere when it comes to indexing data, the elasticsearch team exposed the <code>BulkProcessor<\/code>, which helps you a lot:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\nbulkProcessor = BulkProcessor.builder(client,\n        new BulkProcessor.Listener() {\n\n    @Override\n    public void beforeBulk(long executionId,\n                           BulkRequest request) {\n        logger.info(&quot;Going to execute new bulk composed of {} actions&quot;,\n                request.numberOfActions());\n    }\n\n    @Override\n    public void afterBulk(long executionId,\n                          BulkRequest request,\n                          BulkResponse response) {\n        logger.info(&quot;Executed bulk composed of {} actions&quot;,\n                request.numberOfActions());\n    }\n\n    @Override\n    public void afterBulk(long executionId,\n                          BulkRequest request,\n                          Throwable failure) {\n        logger.warn(&quot;Error executing bulk&quot;, failure);\n    
}\n}).setBulkActions(100).setFlushInterval(TimeValue.timeValueMinutes(5))\n        .setConcurrentRequests(10).build();\n<\/pre>\n<p>The code above creates the <code>BulkProcessor<\/code> and configures it to execute the bulk when 100 documents are ready to be indexed or when 5 minutes have passed since the last bulk execution. Also, a maximum of 10 concurrent bulks will be run.<\/p>\n<p>The following code adds an index request to the previously created <code>BulkProcessor<\/code>:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\nbulkProcessor.add(Requests.indexRequest(indexName).type(typeName)\n        .id(id).source(jsonWriter.toString()));\n<\/pre>\n<p>And&#8230;don&#8217;t forget to close the bulk processor at the end to index any remaining documents:<\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\nbulkProcessor.close();\n<\/pre>\n<p>After reading this article, you should know in detail how to write an elasticsearch river.<\/p>\n<p>Just to be sure, let&#8217;s break it down into a few simple steps:<\/p>\n<ul>\n<li>Write your own plugin class, which implements the <code>Plugin<\/code> interface (extending the <code>AbstractPlugin<\/code> class)<\/li>\n<li>Add the <code>onModule<\/code> method to your plugin class<\/li>\n<li>Write the Guice module for your plugin<\/li>\n<li>Add the es-plugin.properties file containing the name of the plugin class to load<\/li>\n<li>Write your river class, which implements <code>River<\/code> and extends <code>AbstractRiverComponent<\/code><\/li>\n<\/ul>\n<p>I hope this insight helps you to write your own plugin; let me know how you get on&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Up until now I told you why I think elasticsearch is so cool and how you can use it combined with Spring. It\u2019s now time to get to something a little more technical. 
For example, once you have a search engine running you need to index data; when it comes to indexing data you usually [&hellip;]<\/p>\n","protected":false},"author":71,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[15,31,65,10],"tags":[35,33,61,26,326],"class_list":["post-6691","post","type-post","status-publish","format-standard","hentry","category-enterprise-search","category-java","category-big_data_search","category-development","tag-lucene","tag-solr","tag-elasticsearch","tag-plugin","tag-river"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to write an elasticsearch river plugin - Trifork Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to write an elasticsearch river plugin - Trifork Blog\" \/>\n<meta property=\"og:description\" content=\"Up until now I told you why I think elasticsearch is so cool and how you can use it combined with Spring. It\u2019s now time to get to something a little more technical. 
For example, once you have a search engine running you need to index data; when it comes to indexing data you usually [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/\" \/>\n<meta property=\"og:site_name\" content=\"Trifork Blog\" \/>\n<meta property=\"article:published_time\" content=\"2013-01-10T14:29:27+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png\" \/>\n<meta name=\"author\" content=\"Luca Cavanna\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Luca Cavanna\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/\",\"url\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/\",\"name\":\"How to write an elasticsearch river plugin - Trifork 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/trifork.nl\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#primaryimage\"},\"thumbnailUrl\":\"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png\",\"datePublished\":\"2013-01-10T14:29:27+00:00\",\"author\":{\"@id\":\"https:\/\/trifork.nl\/blog\/#\/schema\/person\/d9aa0a29580038af46b7a223de3e4fd8\"},\"breadcrumb\":{\"@id\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#primaryimage\",\"url\":\"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png\",\"contentUrl\":\"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/trifork.nl\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How to write an elasticsearch river plugin\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/trifork.nl\/blog\/#website\",\"url\":\"https:\/\/trifork.nl\/blog\/\",\"name\":\"Trifork Blog\",\"description\":\"Keep updated on the technical solutions Trifork is working 
on!\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/trifork.nl\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/trifork.nl\/blog\/#\/schema\/person\/d9aa0a29580038af46b7a223de3e4fd8\",\"name\":\"Luca Cavanna\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/trifork.nl\/blog\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/56beb100625f636499231471d8c27425?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/56beb100625f636499231471d8c27425?s=96&d=mm&r=g\",\"caption\":\"Luca Cavanna\"},\"url\":\"https:\/\/trifork.nl\/blog\/author\/luca\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"How to write an elasticsearch river plugin - Trifork Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/","og_locale":"en_US","og_type":"article","og_title":"How to write an elasticsearch river plugin - Trifork Blog","og_description":"Up until now I told you why I think elasticsearch is so cool and how you can use it combined with Spring. It\u2019s now time to get to something a little more technical. 
For example, once you have a search engine running you need to index data; when it comes to indexing data you usually [&hellip;]","og_url":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/","og_site_name":"Trifork Blog","article_published_time":"2013-01-10T14:29:27+00:00","og_image":[{"url":"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png","type":"","width":"","height":""}],"author":"Luca Cavanna","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Luca Cavanna","Est. reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/","url":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/","name":"How to write an elasticsearch river plugin - Trifork Blog","isPartOf":{"@id":"https:\/\/trifork.nl\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#primaryimage"},"image":{"@id":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#primaryimage"},"thumbnailUrl":"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png","datePublished":"2013-01-10T14:29:27+00:00","author":{"@id":"https:\/\/trifork.nl\/blog\/#\/schema\/person\/d9aa0a29580038af46b7a223de3e4fd8"},"breadcrumb":{"@id":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#primaryimage","url":"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png","contentUrl":"http:\/\/www.pilato.fr\/rssriver\/images\/logo-icon.png"},{"@type":"BreadcrumbList","@id":"https:\/\/trifork.nl\/blog\/how-to-write-an-elasticsearch-river-plugin\/#breadcrumb","item
ListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/trifork.nl\/blog\/"},{"@type":"ListItem","position":2,"name":"How to write an elasticsearch river plugin"}]},{"@type":"WebSite","@id":"https:\/\/trifork.nl\/blog\/#website","url":"https:\/\/trifork.nl\/blog\/","name":"Trifork Blog","description":"Keep updated on the technical solutions Trifork is working on!","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/trifork.nl\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/trifork.nl\/blog\/#\/schema\/person\/d9aa0a29580038af46b7a223de3e4fd8","name":"Luca Cavanna","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/trifork.nl\/blog\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/56beb100625f636499231471d8c27425?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/56beb100625f636499231471d8c27425?s=96&d=mm&r=g","caption":"Luca 
Cavanna"},"url":"https:\/\/trifork.nl\/blog\/author\/luca\/"}]}},"_links":{"self":[{"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/posts\/6691","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/users\/71"}],"replies":[{"embeddable":true,"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/comments?post=6691"}],"version-history":[{"count":0,"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/posts\/6691\/revisions"}],"wp:attachment":[{"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/media?parent=6691"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/categories?post=6691"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/trifork.nl\/blog\/wp-json\/wp\/v2\/tags?post=6691"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}