Bonsai Blog Posts
Elasticsearch 1.0 was released in February, bringing with it a number of performance and stability improvements, as well as useful new features. We released...
Big congratulations to the Elasticsearch team, which shipped version 1.0 yesterday! This is a major milestone that represents thousands of hours of...
January 2013: January brought a fresh new year, with new projects, renewed ambitions, and 100% availability from our services. Our Heroku add-on experienced...
'Twas the month before Christmas / and all through the cloud / not a service was stirring / our clusters ran proud. December was a fairly quiet, hands...
While our Pingdom checks report a bit differently due to a config change in our health-check endpoint, our uptime for November 2013 was 100%. During November...
Summary On Tuesday, October 29, we experienced an outage on our primary Elasticsearch cluster in our US East region. Up to 20% of our customers in this...
After a smooth upgrade, all of our customers are now running on the latest and greatest Elasticsearch 0.90.0! Elasticsearch 0.90 was released just last...
Good news, everyone! Elasticsearch 0.90.0 was released today. Version 0.90 brings some excellent improvements for Elasticsearch users everywhere, and...
March was a hard month for our availability and uptime. We suffered two major incidents in March, which between them caused multiple hours of degraded...
Yesterday, Elasticsearch announced the release of version 0.20.6, which includes several critical bug fixes related to Elasticsearch and Lucene. Today...
On Monday, March 5, 2013, we experienced a substantial outage of our production Elasticsearch cluster. We are profoundly sorry for the outage and its effect...
[...] would have returned with a 404 error. Bringing in the big guns With one node offline, and index creation still failing intermittently, we enlisted the help of Elasticsearch creator Shay Banon to [...]
[...] found and fixed a small regression in our routing layer, which caused some complications with index creation over the past 24 hours. We felt the issue deserved a post-mortem writeup with more details. [...]
[...] requests would fail with a 502 error, only to apparently succeed a few minutes later, while index creation requests failed entirely. Based on the logs, we saw that servers within the Elasticsearch [...]
[...] release of 0.90: Probably the biggest improvement that our users will notice is much better memory usage when loading fielddata for faceting or sorting on a field. Fielddata uses less memory and [...]
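For context on the fielddata mention above: fielddata is loaded the first time a field is used for sorting or faceting. As a rough sketch only (not taken from the post itself; the index name "logs", the field "status", and the local URL are assumptions), a 0.90-era request that would trigger fielddata loading looks something like this:

```python
# Rough illustration only: a 0.90-style search that sorts and facets on a
# field, which is what causes Elasticsearch to load fielddata into memory.
# The index name "logs", the field "status", and the local URL are assumptions.
import requests

ES_URL = "http://localhost:9200"

query = {
    "query": {"match_all": {}},
    "sort": [{"status": {"order": "desc"}}],  # sorting on "status" loads its fielddata
    "facets": {                               # 0.90-era facets (pre-aggregations)
        "status_counts": {"terms": {"field": "status"}}
    },
}

resp = requests.post(ES_URL + "/logs/_search", json=query)
print(resp.json())
```

The improvement called out in the excerpt is that the in-memory structures backing this kind of request became smaller in 0.90.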
[...] the 25th, when an Elasticsearch node was restarted abruptly to recover from unusual load and memory usage, without having been removed from our load balancers. During that time, a fraction of requests [...]
[...] returned to normal operation, reporting a green state, with normal load levels, CPU and memory usage across all nodes. Total duration of this outage was approximately 20 minutes of hard downtime [...]
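Several of these post-mortems refer to the cluster reporting a green state after recovery. A minimal sketch of that kind of check, assuming a locally reachable node (this code is not from the Bonsai posts):

```python
# Minimal sketch: poll /_cluster/health until the cluster reports "green",
# the state the post-mortem above refers to. Assumes a locally reachable
# node; this code is not taken from the Bonsai posts.
import time
import requests

ES_URL = "http://localhost:9200"

def wait_for_green(timeout_s=600, poll_s=5):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        health = requests.get(ES_URL + "/_cluster/health").json()
        if health.get("status") == "green":
            return health
        time.sleep(poll_s)
    raise TimeoutError("cluster did not reach green within %s seconds" % timeout_s)

print(wait_for_green())
```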
[...] of degraded performance, and four full cluster restarts, contributing nearly 20 minutes of hard downtime each. Our March uptime, as reported by Pingdom health checks, was 99.73%. That’s the [...]
[...] , Inc. While this upgrade did require a full cluster restart, we were able to limit our hard downtime to about three minutes. All index primary shards were back online in less than ten minutes, [...]
[...] usage across all nodes. Total duration of this outage was approximately 20 minutes of hard downtime for an average index, due to cluster restarts. In addition to that, approximately six [...]
[...] we did experience a small blip: about five seconds of partial 503 responses during a minor system maintenance deploy. However, the issue was automatically corrected, and otherwise our service ran [...]
[...] for November 2013 was 100%. During November, we performed some notable upgrades to our routing proxy layer, as well as minor release upgrades to Elasticsearch in all our clusters and regions. [...]
[...] not accounted for in our uptime reports. On the 21st, we had a few hours during which index creation requests were failing due to a regression deployed with our multi-index syntax support. Another [...]
[...] recording of last week's webinar by Elasticsearch, Inc. While this upgrade did require a full cluster restart, we were able to limit our hard downtime to about three minutes. All index primary shards [...]
[...] binary itself to affect the cluster settings you need. Expect some failovers to require a full cluster restart. For example, we were unable to set index.recovery.initial_shards via the API, so we were [...]
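A sketch of the kind of settings update that excerpt describes attempting, with a hypothetical index name; per the post, this particular setting could not be applied through the API and had to be configured on the nodes instead:

```python
# Sketch of the settings update the excerpt describes attempting. The index
# name "my_index" is hypothetical; per the post, this particular setting
# could not be applied via the API and had to be configured on the nodes.
import requests

ES_URL = "http://localhost:9200"

resp = requests.put(
    ES_URL + "/my_index/_settings",
    json={"index.recovery.initial_shards": "quorum"},
)
print(resp.status_code, resp.text)

# Static alternative, set in elasticsearch.yml on each node:
#   index.recovery.initial_shards: quorum
```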
[...] deployed with our multi-index syntax support. Another was on Monday the 25th, when an Elasticsearch node was restarted abruptly to recover from unusual load and memory usage, without having been [...]