{"id":645047,"date":"2013-03-04T08:00:43","date_gmt":"2013-03-04T13:00:43","guid":{"rendered":"http:\/\/gigaom.com\/?p=613362"},"modified":"2013-03-04T08:00:43","modified_gmt":"2013-03-04T13:00:43","slug":"the-history-of-hadoop-from-4-nodes-to-the-future-of-data","status":"publish","type":"post","link":"https:\/\/mereja.media\/index\/645047","title":{"rendered":"The history of Hadoop: From 4 nodes to the future of data"},"content":{"rendered":"<p>Depending on how one defines its birth, <a href=\"http:\/\/hadoop.apache.org\/\">Hadoop<\/a> is now 10 years old. In that decade, Hadoop has gone from being the hopeful answer to Yahoo\u2019s search-engine woes to a general-purpose computing platform that\u2019s poised to be the foundation for the next generation of data-based applications.<\/p>\n<p>Alone, Hadoop is a software market that IDC <a href=\"http:\/\/gigaom.com\/2012\/05\/07\/all-aboard-the-hadoop-money-train\/\">predicts will be worth $813 million<\/a> in 2016 (although that number is likely very low), but it\u2019s also driving a big data market the research firm <a href=\"http:\/\/gigaom.com\/2013\/01\/08\/idc-says-big-data-will-be-24b-market-in-2016-i-say-its-bigger\/\">predicts will hit more than $23 billion<\/a> by 2016. Since Cloudera launched in 2008, Hadoop has spawned dozens of startups and <a href=\"http:\/\/gigaom.com\/2012\/11\/09\/a-few-stats-rumors-and-stories-on-on-hadoops-rapid-growth\/\">spurred hundreds of millions in venture capital investment<\/a> since 2008.<\/p>\n<p>In this four-part series, we\u2019ll explain everything anyone concerned with information technology needs to know about Hadoop. Part I is the history of Hadoop from the people who willed it into existence and took it mainstream. Part II is more graphic; a map of the now-large and complex ecosystem of companies selling Hadoop products. Part III is a look into the future of Hadoop that should serve as an opening salvo for much of the discussion <a href=\"http:\/\/event.gigaom.com\/structuredata\/?utm_source=data&#38;utm_medium=editorial&#038;%2338;utm_campaign=intext&#038;%2338;utm_term=613362+the-history-of-hadoop-from-4-nodes-to-the-future-of-data&#038;%2338;utm_content=dharrisstructure\">at our Structure: Data conference<\/a> March 20-21 in New York. Finally, Part IV will highlight some the best Hadoop applications and seminal moments in Hadoop history, as reported by GigaOM over the years.<\/p>\n<p> <iframe loading=\"lazy\" width=\"100%\" height=\"166\" scrolling=\"no\" frameborder=\"no\" src=\"http:\/\/w.soundcloud.com\/player?url=http%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F80972101%253Fsecret_token%253Ds-RbbVK\"><\/iframe> <\/p>\n<h2 id=\"wanted-a-better-search-engine\">Wanted: A better search engine<\/h2>\n<p>Almost everywhere you go online now, Hadoop is there in some capacity. 
<a href=\"http:\/\/gigaom.com\/2012\/06\/13\/how-facebook-keeps-100-petabytes-of-hadoop-data-online\/\">Facebook<\/a>, <a href=\"http:\/\/gigaom.com\/2012\/01\/31\/under-the-covers-of-ebays-big-data-operation\/\">eBay<\/a>, <a href=\"http:\/\/gigaom.com\/2011\/11\/02\/how-etsy-handcrafted-a-big-data-strategy\/\">Etsy<\/a>, <a href=\"http:\/\/gigaom.com\/2012\/12\/02\/pinterest-flipboard-and-yelp-tell-how-to-save-big-bucks-in-the-cloud\/\">Yelp<\/a>\u00a0, <a href=\"http:\/\/gigaom.com\/2012\/03\/07\/how-twitter-is-doing-its-part-to-democratize-big-data\/\">Twitter<\/a>, <a href=\"http:\/\/gigaom.com\/2012\/09\/17\/5-ideas-to-help-everyone-make-the-most-of-big-data\/\">Salesforce.com<\/a> \u2014 you name a popular web site or service, and the chances are it\u2019s using Hadoop to analyze the mountains of data it\u2019s generating about user behavior and even its own operations. Even in the physical world, forward-thinking companies in fields ranging from <a href=\"http:\/\/gigaom.com\/2012\/09\/16\/how-disney-built-a-big-data-platform-on-a-startup-budget\/\">entertainment<\/a> to <a href=\"http:\/\/gigaom.com\/2012\/10\/11\/the-rent-is-too-damn-high-but-big-data-means-the-power-bill-isnt\/\">energy management<\/a> to <a href=\"http:\/\/gigaom.com\/2012\/04\/17\/satellite-imagery-and-hadoop-mean-70m-for-skybox\/\">satellite imagery<\/a> are using Hadoop to analyze the unique types of data they\u2019re collecting and generating.<\/p>\n<p>Everyone involved with information technology at least knows what it is. Hadoop even serves as the foundation for new-school <a href=\"http:\/\/incubator.apache.org\/giraph\/\">graph<\/a> and <a href=\"http:\/\/hbase.apache.org\/\">NoSQL databases<\/a>, as well as <a href=\"http:\/\/gigaom.com\/2012\/07\/24\/how-one-startup-wants-to-inject-hadoop-into-your-sql\/\">bigger, badder versions of relational databases<\/a> that have been around for decades.<\/p>\n<p>But it wasn\u2019t always this way, and today\u2019s uses are a long way off from the original vision of what Hadoop could be.<\/p>\n<div id=\"attachment_616209\" class=\"wp-caption alignleft\" style=\"width: 210px\"><img decoding=\"async\" alt=\"Doug Cutting\" src=\"http:\/\/gigaom2.files.wordpress.com\/2013\/03\/cutting.jpg?w=708\" class=\"size-full wp-image-616209\"><\/p>\n<p class=\"wp-caption-text\">Doug Cutting<\/p>\n<\/div>\n<p>When the seeds of Hadoop were first planted in 2002, the world just wanted a better open-source search engine. So then-Internet Archive search director Doug Cutting and University of Washington graduate student Mike Cafarella set out to build it. They called their project <a href=\"http:\/\/nutch.apache.org\/\">Nutch<\/a> and it was designed with that era\u2019s web in mind.<\/p>\n<p>Looking back on it today, early iterations of Nutch were kind of laughable. About a year into their work on it, Cutting and Cafarella thought things were going pretty well because Nutch was already able to crawl and index hundreds of millions of pages. \u201cAt the time, when we started, we were sort of thinking that a web search engine was around a billion pages,\u201d Cutting explained to me, \u201cso we were getting up there.\u201d<\/p>\n<p>There are now about 700 million web sites and, <a href=\"http:\/\/articles.cnn.com\/2011-09-12\/tech\/web.index_1_internet-neurons-human-brain?_s=PM%3ATECH\">according to Wired\u2019s Kevin Kelly<\/a>, well over a trillion web pages.<\/p>\n<p>But getting Nutch to work wasn\u2019t easy. 
It could only run across a handful of machines, and someone had to watch it around the clock to make sure it didn’t fall down.</p>

<figure><img src="http://gigaom2.files.wordpress.com/2013/03/cafarella241.jpg?w=708" alt="Mike Cafarella"><figcaption>Mike Cafarella</figcaption></figure>

<p>“I remember working on it for several months, being quite proud of what we had been doing, and then the Google File System paper came out and I realized, ‘Oh, that’s a much better way of doing it. We should do it that way,’” reminisced Cafarella. “Then, by the time we had a first working version, the MapReduce paper came out and that seemed like a pretty good idea, too.”</p>

<p>Google released the <a href="http://research.google.com/archive/gfs.html">Google File System paper</a> in October 2003 and the <a href="http://research.google.com/archive/mapreduce.html">MapReduce paper</a> in December 2004. The latter would prove especially revelatory to the two engineers building Nutch.</p>

<p>“What they spent a lot of time doing was generalizing this into a framework that automated all these steps that we were doing manually,” Cutting explained.</p>

<p><em>[Embedded audio: interview clip, http://api.soundcloud.com/tracks/80972106]</em></p>

<p>Raymie Stata, founder and CEO of Hadoop startup <a href="http://verticloud.com/">VertiCloud</a> (and a former Yahoo CTO), calls MapReduce “a fantastic kind of abstraction” over the distributed computing methods and algorithms most search companies were already using:</p>

<blockquote>
<p>“Everyone had something that pretty much was like MapReduce, because we were all solving the same problems. We were trying to handle literally billions of web pages on machines that are probably, if you go back and check, epsilon more powerful than today’s cell phones. … So there was no option but to latch hundreds to thousands of machines together to build the index. So it was out of desperation that MapReduce was invented.”</p>
</blockquote>

<figure><img src="http://gigaom2.files.wordpress.com/2013/03/index-auto-0008-0001.gif?w=708" alt="MapReduce diagram, from the Google paper"><figcaption>Parallel processing in MapReduce, from the Google paper</figcaption></figure>
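<p>It’s worth seeing what that abstraction looks like in a programmer’s hands. Below is a minimal sketch of the canonical word-count job written against today’s Hadoop Java API (the org.apache.hadoop.mapreduce classes, which postdate the Nutch era); treat it as an illustration of the programming model, not as the code Cutting and Cafarella actually wrote. The developer supplies only a map function and a reduce function, while the framework handles splitting the input, shuffling intermediate results between machines, and recovering from failures.</p>

<pre><code class="language-java">
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel over chunks of the input,
  // emitting (word, 1) for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: the framework has already grouped intermediate
  // pairs by key, so each call sees one word and all of its counts.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
</code></pre>

<p>Notice what the code never mentions: which machines run the job, how intermediate data moves between them, or what happens when one of them dies. That is the generalization Cutting describes, and why, as Stata puts it above, engineers who had hand-built equivalents recognized MapReduce immediately.</p>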
<p>Over the course of a few months, Cutting and Cafarella built the underlying distributed file system and processing framework that would become Hadoop (in Java, notably, whereas Google’s MapReduce used C++) and ported Nutch on top of it. Now, instead of having one guy watch a handful of machines all day long, Cutting explained, they could just set it running across the 20 to 40 machines that he and Cafarella were able to scrape together from their employers.</p>

<p><em>[Embedded audio: interview clip, http://api.soundcloud.com/tracks/80972114]</em></p>

<h2 id="bringing-hadoop-to-life-but-no">Bringing Hadoop to life (but not in search)</h2>

<p>Anyone vaguely familiar with the history of Hadoop can guess what happens next: In 2006, Cutting went to work at Yahoo, which was equally impressed by the Google File System and MapReduce papers and wanted to build open-source technologies based on them. The storage and processing parts of Nutch were spun out to form Hadoop (named after Cutting’s son’s stuffed elephant) as an open-source Apache Software Foundation project, while the Nutch web crawler remained its own separate project.</p>

<p>“This seemed like a perfect fit, because I was looking for more people to work on it, and people who had thousands of computers to run it on,” Cutting said.</p>

<p>Cafarella, now <a href="http://web.eecs.umich.edu/~michjc/bio.html">an associate professor at the University of Michigan</a>, opted to forgo a career in corporate IT and focus on his education. He’s happy as a professor — and currently working on a Hadoop-complementary project called <a href="http://cloudera.github.com/RecordBreaker/">RecordBreaker</a> — but, he joked, “My dad calls me the Pete Best of the big data world.”</p>

<p>Ironically, though, 2006-era Hadoop was nowhere near ready to handle production search workloads at webscale — the very task it was created to do. “The thing you gotta remember,” explained Hortonworks co-founder and CEO Eric Baldeschwieler (previously VP of Hadoop software development at Yahoo), “is at the time we started adopting it, the aspiration was definitely to rebuild Yahoo’s web search infrastructure, but Hadoop only really worked on 5 to 20 nodes at that point, and it wasn’t very performant, either.”</p>

<figure><a href="http://www.flickr.com/photos/yodelanecdotal/4746014041/sizes/l/in/photostream/"><img src="http://gigaom2.files.wordpress.com/2013/03/4746014041_7a80b97c2e_b.jpg?w=708" alt="Baldeschwieler at Hadoop Summit 2010"></a><figcaption>Baldeschwieler at Hadoop Summit 2010. Source: Yodel Anecdotal</figcaption></figure>

<p>Stata recalls a “slow march” of horizontal scalability, growing Hadoop’s capabilities from the single digits of nodes into the tens and ultimately into the thousands. “It was just an ongoing slog … every factor of 2 or 1.5 even was serious engineering work,” he said.
But Yahoo was determined to scale Hadoop as far as it needed to go, and it kept investing heavily in the project.</p>

<p>It actually took years for Yahoo to move its web index onto Hadoop, but in the meantime the company made what would prove a fortuitous decision: it set up what it called a “research grid” for its data scientists, to use today’s parlance. The grid started with dozens of nodes and ultimately grew to hundreds as users added more and more data and Hadoop’s technology matured. What began life as a proof of concept fast became a whole lot more.</p>

<p>“This very quickly kind of exploded and became our core mission,” Baldeschwieler said, “because what happened is the data scientists not only got interesting research results — what we had anticipated — but they also prototyped new applications and demonstrated that those applications could substantially improve Yahoo’s search relevance or Yahoo’s advertising revenue.”</p>

<p>Shortly thereafter, Yahoo began rolling out Hadoop to power analytics for various production applications. Eventually, Stata explained, Hadoop had proven so effective that Yahoo merged its search and advertising operations into one unit so that Yahoo’s bread-and-butter sponsored search business could benefit from the new technology.</p>

<figure><a href="http://www.flickr.com/photos/joeywan/2467450286/"><img src="http://gigaom2.files.wordpress.com/2013/03/2467450286_db547ef9ef_b.jpg?w=708" alt="Cutting (center) flanked by Baldeschwieler and Om Malik at GigaOM's Hadoop Meetup in 2008."></a><figcaption>Cutting (center) flanked by Baldeschwieler and Om Malik at GigaOM’s Hadoop Meetup in 2008.</figcaption></figure>

<p>And <a href="http://gigaom.com/2010/06/29/yahoo-secures-and-tames-hadoop-with-new-tools/">that’s exactly what happened</a>, because although data scientists didn’t need things like service-level agreements, business leaders did. So, Stata said, Yahoo implemented scheduling changes within Hadoop (work of a kind that survives in today’s pluggable job schedulers; see the sketch below). And although data scientists didn’t need security, Securities and Exchange Commission requirements mandated a certain level of security once Yahoo moved its sponsored search data onto the platform.</p>

<p>“That drove a certain level of maturity,” Stata said. “… We ran all the money in Yahoo through it, eventually.”</p>

<p>Hadoop’s transformation into the technology “behind every click” (or every batch process, technically) at Yahoo was pretty much complete by 2008, Baldeschwieler said. That meant doing everything from those line-of-business applications to spam filtering to personalized display decisions on the Yahoo front page.</p>
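<p>As for the scheduling sketch promised above: the article doesn’t name the mechanism Yahoo built, but this style of multi-tenancy lives on in Hadoop’s pluggable schedulers, which carve a shared cluster into named queues with capacity guarantees. Here is a minimal, hypothetical illustration of routing a job to a dedicated queue using the modern MapReduce property name; the queue name and job name are illustrative, not from the article.</p>

<pre><code class="language-java">
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ProductionSubmit {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Route the job to a dedicated queue so SLA-bound production work gets
    // guaranteed capacity alongside ad-hoc research jobs. The queue name
    // "production" is illustrative, not from the article.
    conf.set("mapreduce.job.queuename", "production");
    Job job = Job.getInstance(conf, "sponsored-search-reporting");
    // Configure mapper, reducer, and input/output paths exactly as in the
    // word-count example above, then submit with:
    //   System.exit(job.waitForCompletion(true) ? 0 : 1);
    System.out.println("Submitting to queue: "
        + job.getConfiguration().get("mapreduce.job.queuename"));
  }
}
</code></pre>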
<p>By the time Yahoo spun out Hortonworks as a separate, Hadoop-focused software company in 2011, Yahoo’s Hadoop infrastructure consisted of 42,000 nodes and hundreds of petabytes of storage.</p>

<p><em>[Embedded audio: interview clip, http://api.soundcloud.com/tracks/80972099]</em></p>

<h2 id="from-the-classroom">From the classroom …</h2>

<p>Although Yahoo was responsible for the vast majority of development during Hadoop’s formative years, the project didn’t exist in a bubble inside Yahoo’s headquarters. It was a full-on Apache project that attracted users and contributors from around the world, among them Tom White, a Welshman who wrote O’Reilly Media’s book <i>Hadoop: The Definitive Guide</i> despite being, as Cutting describes him, a guy who just liked software and played with Hadoop at night.</p>

<p>Up in Seattle in 2006, a young Google engineer named Christophe Bisciglia was using his 20 percent time to teach a computer science course at the University of Washington. Google wanted to hire new employees with experience working on webscale data, but its MapReduce code was proprietary, so it bought a rack of servers and used Hadoop as a proxy.</p>

<p><a href="http://gigaom.com/2013/03/04/the-history-of-hadoop-from-4-nodes-to-the-future-of-data/2/">Go to page 2 (of 2) on GigaOM.</a></p>
<a href=\"http:\/\/pro.gigaom.com\/?utm_source=data&#038;utm_medium=editorial&#038;utm_campaign=auto3&#038;utm_term=613362+the-history-of-hadoop-from-4-nodes-to-the-future-of-data&#038;utm_content=dharrisstructure\">Sign up for a free trial<\/a>.<\/p>\n<ul>\n<li><a href=\"http:\/\/pro.gigaom.com\/2012\/03\/a-near-term-outlook-for-big-data\/?utm_source=data&#038;utm_medium=editorial&#038;utm_campaign=auto3&#038;utm_term=613362+the-history-of-hadoop-from-4-nodes-to-the-future-of-data&#038;utm_content=dharrisstructure\">A near-term outlook for big data<\/a><\/li>\n<li><a href=\"http:\/\/pro.gigaom.com\/2011\/03\/defining-hadoop-the-players-technologies-and-challenges-of-2011\/?utm_source=data&#038;utm_medium=editorial&#038;utm_campaign=auto3&#038;utm_term=613362+the-history-of-hadoop-from-4-nodes-to-the-future-of-data&#038;utm_content=dharrisstructure\">Defining Hadoop: the Players, Technologies and Challenges of 2011<\/a><\/li>\n<li><a href=\"http:\/\/pro.gigaom.com\/2012\/11\/real-%C2%ADtime-query-for-hadoop-democratizes-access-to-big-data-analytics\/?utm_source=data&#038;utm_medium=editorial&#038;utm_campaign=auto3&#038;utm_term=613362+the-history-of-hadoop-from-4-nodes-to-the-future-of-data&#038;utm_content=dharrisstructure\">Real-\u00adtime query for Hadoop democratizes access\u00a0to big data analytics<\/a><\/li>\n<\/ul>\n<p><img width='1' height='1' src='http:\/\/gigaom.feedsportal.com\/c\/34996\/f\/646446\/s\/292f0b16\/mf.gif' border='0'\/><\/p>\n<div class='mf-viral'>\n<table border='0'>\n<tr>\n<td valign='middle'><a href=\"http:\/\/share.feedsportal.com\/viral\/sendEmail.cfm?lang=en&#038;title=The+history+of+Hadoop%3A+From+4+nodes+to+the+future+of+data&#038;link=http%3A%2F%2Fgigaom.com%2F2013%2F03%2F04%2Fthe-history-of-hadoop-from-4-nodes-to-the-future-of-data%2F\" ><img decoding=\"async\" src=\"http:\/\/res3.feedsportal.com\/images\/emailthis2.gif\" border=\"0\" \/><\/a><\/td>\n<td valign='middle'><a href=\"http:\/\/res.feedsportal.com\/viral\/bookmark.cfm?title=The+history+of+Hadoop%3A+From+4+nodes+to+the+future+of+data&#038;link=http%3A%2F%2Fgigaom.com%2F2013%2F03%2F04%2Fthe-history-of-hadoop-from-4-nodes-to-the-future-of-data%2F\" ><img decoding=\"async\" src=\"http:\/\/res3.feedsportal.com\/images\/bookmark.gif\" border=\"0\" \/><\/a><\/td>\n<\/tr>\n<\/table>\n<\/div>\n<p><a href=\"http:\/\/da.feedsportal.com\/r\/159490484487\/u\/49\/f\/646446\/c\/34996\/s\/292f0b16\/a2.htm\"><img decoding=\"async\" src=\"http:\/\/da.feedsportal.com\/r\/159490484487\/u\/49\/f\/646446\/c\/34996\/s\/292f0b16\/a2.img\" border=\"0\"\/><\/a><img loading=\"lazy\" decoding=\"async\" width=\"1\" height=\"1\" src=\"http:\/\/pi.feedsportal.com\/r\/159490484487\/u\/49\/f\/646446\/c\/34996\/s\/292f0b16\/a2t.img\" border=\"0\"\/><\/p>\n<div class=\"feedflare\">\n<a href=\"http:\/\/feeds.feedburner.com\/~ff\/OmMalik?a=_qdse-3Pwek:BcjU_JZ79xM:yIl2AUoC8zA\"><img decoding=\"async\" src=\"http:\/\/feeds.feedburner.com\/~ff\/OmMalik?d=yIl2AUoC8zA\" border=\"0\"><\/img><\/a>\n<\/div>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/feeds.feedburner.com\/~r\/OmMalik\/~4\/_qdse-3Pwek\" height=\"1\" width=\"1\"\/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Depending on how one defines its birth, Hadoop is now 10 years old. In that decade, Hadoop has gone from being the hopeful answer to Yahoo\u2019s search-engine woes to a general-purpose computing platform that\u2019s poised to be the foundation for the next generation of data-based applications. 