Overstock.com's OpenStack Networking Strategy
Alright, welcome. We're going to talk about Overstock.com's evolution, if you will, from nova-network to Neutron, something we've done in the last year as we've progressed along our cloud journey. My name is Mike Smith, and I'm a cloud systems architect at Overstock. On the front row with me, if you'd stand and wave, is Cynthia from Midokura. This talk is really about Overstock's evolution in the networking space, but Cynthia in particular, and Midokura in general, are a key part of that evolution, so I asked her to be here as well. At the end we're going to leave some time for questions: if you have questions about Midokura and the work they've done with us, or anything we've done at Overstock, we're happy to take those, but I wanted to point her out so you know where to direct questions if you want to catch up with her later.

I should also introduce Overstock itself. There are a lot of faces to Overstock.com. Many of you probably know Overstock from our website and the products you can purchase online there; we have millions of them, and with Mother's Day coming up, that's what we're featuring today. But there are other faces to Overstock that you may not know exist, or aren't as familiar with as our core shopping website, so I want to introduce some of them. One is Worldstock. Worldstock is part of our website, but it focuses on artisans and craftsmen from developing nations, giving them a fair shake, a fair deal, fair trade, to present their products on a world stage through Overstock. Much of the money generated there, beyond what goes to the artisans themselves, is also funneled into schools and similar projects in those developing countries, so it's a pretty cool project. Another one you might be familiar with is pet adoptions by Overstock. This is something we talked about at a previous summit; it was one of the first things we launched on our OpenStack cloud. Overstock works with pet shelters from around the world, including some right here in Austin, Texas, and we use our search technology to make connections between pets that need homes and homes willing to take them. Main Street Revolution is another aspect of Overstock, which is really about giving small mom-and-pop businesses exposure on a national stage; everything you see on the Main Street Revolution site is in that vein. Recently Overstock launched a farmers market business where you can connect with local farmers in your area and have things delivered straight to your door, which means less time in transit for produce and staying local to where you are. Overstock is also known for being a big proponent of Bitcoin: we were the first major retailer to accept Bitcoin, and the first to accept Bitcoin no matter where you are in the world, which is a pretty big deal. We actually have a Bitcoin ATM right in our lobby, which is fun to see.

With that in mind: Overstock was founded in 1999, and behind the shopping-site experience you see there are some 500-plus applications behind the scenes. Most of those applications are Java-based, using Oracle databases on the back end, and traditionally all of those apps were housed on physical hardware. Because we've been around since '99, these are not apps that were born in the cloud.
These are apps that have been around for a while, and many new ones come out every day; it's all part of a service-oriented, microservices-type architecture behind the scenes at Overstock. OpenStack now plays a major role, not only in dev, test, staging, and disaster recovery, but on the production website itself, and we're going to talk about some of that. One interesting aspect: in dev/test we have 160 distinct, separate dev/test environments today, all powered by OpenStack. We currently operate six OpenStack clouds across three distinct geographic locations; some of those are ones we're phasing out in favor of the newer architecture we're presenting here today.

This is a more realistic depiction of our evolution with OpenStack at Overstock. We first experimented with OpenStack in about 2013, during the Folsom release; one of those Folsom clouds is actually still around today, in its final throes of decommissioning. By June of 2013 we had that dev cluster online, and we brought up a Havana cluster for production in November 2013. We then took 2014 to arm ourselves with a whole lot of homegrown orchestration: we shopped around for orchestration vendors, decided to build our own, and spent much of that year on research and development around how we wanted to do that. That got us to where we are today, with the next-generation clouds we turned up last May. Today we're on Kilo, moving towards Liberty. If you look at this timeline, there's a very important event that happened right about there, and we're going to talk about it.

First, those three red lines represent peak shopping seasons for us: the fourth quarter of the year, the holidays, Christmas, Thanksgiving. That is obviously where a retail site like ours gets its big influx of revenue for the year, and at each of those red lines we learned a lot. With our first production cloud in November 2013, going into that season we just dabbled with OpenStack in production. By the next year we had not only things like pets but also core shopping services on there, and by 2015, this last year, virtualization on OpenStack was on par with the physical infrastructure supporting that shopping season.

But let's talk about this pivotal moment in our evolution. I'm going to invite you to hop in my DeLorean with me; we are setting our time circuits to November 5, 2014. This is actually a quick detour before we get to that pivotal moment. Here we are: we've landed at the OpenStack Summit in Paris, and I was there to present what we had done with OpenStack up to that point at Overstock. We talked about our cloud journey and some of the problems we had, including the clustering technology and storage technology we were using, but also the network we had in place. We were very dependent on an active/failover nova-network implementation, and this is what I said there: "I would say that the network failover delay that we experience when we try to fail over HA nova-networking is probably what keeps us from a broader adoption of this in production, so that's the area of focus for us right now." Okay, so that's where we were. I'm actually going to show you a demo of what this used to look like. What you see here is one of our clouds; it's actually the one I mentioned before that we're in our final stages of decommissioning.
What you're seeing here: we have a pair of controller nodes that run a whole lot of services, we have some compute nodes still in place, and we're about to fail over the nova-network service. You'll see it stopping here on one of the nodes, and it'll start on the other. Down here we've got a couple of our tenant gateways. This particular cloud at its height housed only 100 networks, all VLAN-based; the one on the left is the first tenant network in that range, and the one on the right is one near the end. When nova-network failed over, you'd see a lot of iptables work going on, and very soon both of these gateways drop out completely. This commences the fifteen-minute-or-so network failover we had on our older cloud. I'm going to skip forward a little, because it gets real dull real fast. You'll notice that the first tenant comes back relatively quickly, and that was one of the lessons we learned: just because it worked well for the first tenant doesn't mean the last one in line is going to have the same experience. This first tenant is now pinging again, back and available, but nova-network goes through and starts a dnsmasq process for each and every network, one at a time, restoring iptables forwarding rules and so on, and by the time the tenant at the end gets in the game it can be many minutes into the process. Here we're about six or seven minutes in before that particular tenant starts responding again. If we skip to near the end of the process, we see that finally nova-network reconnects to the QPID messaging queue and both tenants are back up. So that is what we used to experience; this is what I was talking about when I said this was holding us back from adopting OpenStack in production, and why it was such a critical component for us to address.

Now let's go to the day previous to this: November 4, 2014, back in Paris. In Paris the day before, Cynthia Thomas was about to present "The Life of a Packet." There's Cynthia, there's me. Ms. Thomas presented a tale of distributed gateways, a tale of software virtualization of networks, with MidoNet as an example of a Neutron plugin, and talked about the cool things they were doing with that technology. This was pretty much my reaction, and I knew when we got back to Overstock headquarters that we needed to get in touch with these guys. We began doing experiments in our labs, made scale models, and a number of things, and that really led us to evolve in a number of ways, so I'd like to talk about what those were. We architecturally reworked our entire OpenStack technology stack. We moved to Juno and then Kilo, and as I mentioned we're soon moving to Liberty. We ripped out nova-network, took the leap, and went with Neutron, using the MidoNet plugin with a BGP uplink to our routers, which we'll show. We got rid of the active/passive QPID that was prevalent in those earlier releases and adopted distributed RabbitMQ. We replaced an active/passive MySQL cluster with MariaDB and the Galera plugin, and replaced our clustered GFS storage with Red Hat Ceph storage as well as XtremIO from EMC. The RHCS clustering we had, which limited some of our growth, has been replaced with a shared-nothing architecture, which allows us to grow. We do still implement some Pacemaker clustering for a few services, but that older Red Hat clustering is on its way out.
So that was the pivotal moment that really allowed us to evolve to the next level, to the point where now, by default, all production services at Overstock.com go to our OpenStack-enabled clouds, which is a pretty big move for us.

Let's talk about what this new architecture looks like. This is pretty much what-you-see-is-what-you-get. Those two blue routers are physical network routers, and one of the things I really like about the way MidoNet implements its Neutron plugin is that there is a clean separation, if we want it, between those upstream routers and our virtual gateways down below. In our case that separation is BGP: through BGP peering we advertise the routes we want to make available to our cloud, and they advertise default routes back to us. There is a Java agent that goes on every single one of those compute hosts and gateway hosts; this is the MidoNet software, which they call Midolman. On our controllers we run a ZooKeeper cluster, which MidoNet uses to store all of its network topology: everything you define in Neutron, which is networks, ports, routers, security groups and rules, gets stored within ZooKeeper, and the agents running on your compute and gateway hosts use that information in a pretty interesting way, with simulation threads that run when a packet is about to flow. Real quick, this is what it looks like in your Neutron configuration if you want to enable MidoNet; there's the plugin line, which I'll sketch below.

So say you've got a packet that has come in through your routers and is sitting on one of your gateways. The beautiful thing about this is that these are distributed gateways: if we need more, we just add more, they're all equal weight, traffic is distributed across them, and if one were to disappear the others take the load. No more failing over from a passive node to an active node with all my traffic routing through a single gateway; that is really the key to what Neutron and MidoNet have allowed us to do. So if we've got a flow of traffic sitting on gateway 2, and it wants to reach a VM on that compute host down there, these Java agents talk to the ZooKeeper network state database and figure out all the virtual network entities that don't exist in real life, the ports and routers and security groups the traffic has to flow through, and work out whether the traffic is allowed, where it goes, and where it leaves from. As soon as that's done, the packet doesn't have to flow through anything else: it's a direct VXLAN tunnel from the gateway to that compute host and direct to that VM, which is pretty slick. If you didn't catch it, MidoNet is an overlay on top of a basic network underlay: our network engineering team, with their routers and so forth, provides the underlay network, and MidoNet, as software, supplies the overlay on top of it, implemented with these VXLAN tunnels (you can use GRE as well). On the other side, if a VM on one compute host wants to talk to, say, another VM on a totally different compute host, those agents are empowered to query the network state database and do that calculation for the first packet of the flow. Once they've done that, the kernel is programmed with how that flow is going to work, and it's a direct tunnel from one compute host to the other; it doesn't pass through a gateway or any other networking piece, it's just done in software.
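(As an aside, not part of the talk itself: the "plugin line" mentioned above is just the core_plugin setting in neutron.conf. Here is a minimal sketch of roughly what enabling the MidoNet plugin looked like in that era; the exact Python module path and the available [MIDONET] options vary by OpenStack and MidoNet release, and the API URI, credentials, and project shown below are placeholders, not Overstock's actual settings.)

    # neutron.conf (illustrative sketch; module path and options vary by release)
    [DEFAULT]
    core_plugin = midonet.neutron.plugin_v2.MidonetPluginV2

    [MIDONET]
    # Where the MidoNet API is listening, and the credentials Neutron uses.
    # All values below are placeholders.
    midonet_uri = http://midonet-api.example.com:8181/midonet-api
    username = neutron
    password = CHANGE_ME
    project_id = services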
Then, to complete the picture, if a VM on a compute host needs to reach something outside our private cloud, that same calculation is done: the traffic flows up to a gateway directly and out through our routing infrastructure. So that is how it's implemented, and I'm going to show you a demo of how that looks today.

When I did this demo I showed three gateways on screen; at this particular point in time only two were active, so that's what you're seeing. To set this up: what you see on top are two of the actual gateways shown on the other slide, and one of them is about to be powered off through the UCS KVM remote tools, so you'll see the top section stop pinging altogether. What you see in the middle are two VMs. We have several hundred networks on this particular cloud, as opposed to the 99 VLAN-based ones we had before, but notice that even with this gateway completely shut down and no longer answering pings, the VMs being routed to are unaffected. You'll see this gateway node start to power back up, do some Cisco branding, and eventually come back online. Again, no impact: as it comes back up, these VMs continue to network as they should. Our physical gateway here has started responding again, and we're about to shut down the second one; now the second gateway has been shut down completely. We'll skip forward to the end, and to me the really important part of all this is right here: we stopped our pings on these two VMs, and both of them show 0% packet loss, which is huge for us. What this really means is that the kind of maintenance you just saw was a screen capture I did in the middle of the day, with people using the service. I didn't notify anybody, and I didn't have to worry about it. It's built so that I can take a gateway down, patch it, update it, reboot it, and the same thing applies to those three controllers you saw on the side running ZooKeeper and the network state database: I can take one of those out, I can take a gateway out, I can add a gateway, and it's very non-impactful to what we're doing.
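(Not from the talk: as a rough sketch of the kind of check that demo performs, you can keep continuous pings running against a couple of VMs on different tenant networks while a gateway node is power-cycled out of band, then read the loss summaries. The addresses, probe count, and interval below are made up for illustration.)

    # Ping two VMs on different tenant networks for roughly ten minutes
    # (3000 probes at 0.2 s intervals) while one gateway node is
    # power-cycled out of band, then confirm both logs report 0% loss.
    ping -i 0.2 -c 3000 10.20.1.15 > vm1-ping.log &
    ping -i 0.2 -c 3000 10.30.2.27 > vm2-ping.log &
    wait
    grep 'packet loss' vm1-ping.log vm2-ping.log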
Real quick, to summarize; actually, you know what, I'm going to show you this first. We're such fans of what we've been able to do through our partnership with Midokura that I asked if we could do a little spot with them, and I'm going to show you that now.

"Overstock.com is one of the top online retailers; we ship both domestically and internationally. Making that shopping experience possible takes an enormous amount of infrastructure, software, and a lot of talent. We tend to hire really smart people and invest in their training, and we feel like we can do a better job supporting our applications, our systems, and our solutions than most vendors could. When we were first implementing OpenStack in those earlier days, there weren't a lot of people talking about high availability for services; we were really looking for a flexible, standards-based solution that would let us do what we wanted with it. We really value those things in the entire OpenStack ecosystem, and in open source in general. It became evident that a lot of what Midokura had in place, and what they were working on, solved the exact same problems we were facing, with their active/active distributed gateways and a lot of other features we hadn't even thought of already on the table. During implementation they were extremely flexible in working with us and let us do things the way we wanted to. It wasn't a big, stuffy professional-services engagement if we didn't want it to be; it was very much 'look at the code, try it, learn it, ask questions,' and that flexibility is exactly the way we wanted to operate. I actually love working with our team at Midokura; in fact I brag about them all the time to other operations groups, about the way they handle their support. It's a fantastic model they've built: technical resources, business resources, executive resources available to us instantly when we need them, and I can count on them to be there."

Okay, so that's a very genuine, heartfelt example of our partnership with them, and they're fantastic to work with. Real quick, here is essentially our platform in summary. We run CentOS 7 and use the RDO release of OpenStack, which is Red Hat's community version. We are a combination of UCS and Dell hardware, and the storage for our clouds lives on Red Hat Ceph storage as well as XtremIO from EMC. And of course Neutron with MidoNet Enterprise; Midokura offers a couple of different options, including a pure community open-source version and a fully supported enterprise version. And that's it. I appreciate all of you being here; I'm happy to take any questions you have for me, happy to direct questions to Midokura as well, and I'm thrilled we could be here. If you have any questions, please raise your hand. I believe there's a mic over there as well, so if you'd stand up and speak into it.

I was blown away by the UCS servers: great network performance, but kind of expensive, and I'm a networking guy. Why did you land on UCS servers instead of more of a commodity-type compute platform?

Sure, that's a good question, and in fact when we mention commodity hardware and UCS in the same sentence we often get laughed at, as in "that's not commodity." We do have both Dell and UCS in-house. We like the features UCS gives us, such as additional interfaces we can add in software instead of having to add a physical interface, and it also lets us do things like boot from SAN, so if one of those hosts were to die we could shift it over to another standby physical server node and bring it up. So it's just a choice. One of the nice things about working for Overstock is that in a lot of ways we're like a start-up, but we also have the capital behind us to use some really cool technology, so the short answer is: because we can. That said, if I didn't have that choice, I have full confidence in the architecture we have; those nice-to-have features aren't strictly needed.

I noticed one thing was conspicuously missing from your presentation: IPv6. Can you talk a little about your journey there and any challenges you found?

Sure. IPv6 is on our roadmap; it's not something that's widely adopted within Overstock today, so that journey has yet to be charted.

Can you talk a little about the transition from a physical network to the virtual network and what led you to the solution you're leveraging today? And with the UCS platform, does that imply your previous life was with VMware, using a unified data center solution?

We do run some VMware in other teams at Overstock, mostly for our corporate applications, but this team really hasn't done VMware; before OpenStack we used a kind of homegrown Xen-based virtualization.
And I apologize, I totally blanked on the first part of your question: the transition. Our original implementation was very VLAN-centric and very much coupled to the physical infrastructure, and that was one of the things holding us back: every time we wanted to add a new tenant or a new network (and we wanted one network per tenant), we'd have to go create a VLAN, and the network team would have to set up the routing on their side. The beautiful thing about this architecture, and one of the reasons we selected Midokura over some other options, is that they have a very clean division there if we want it, and that is the BGP layer. Our upstream network is a really traditional network, with nothing virtualization-oriented about it; everything software-oriented and virtualized sits below that BGP peering, so there really wasn't much of a transition there. Now, we did take a slightly different path to get from nova-network to Neutron. If you look back at that Paris summit, eBay gave a fantastic presentation about how to automatically move from nova-network, do a whole bunch of scripting, and come out with Neutron at the end. What we did instead: we had our homegrown orchestration platform, which can provision to both our old cloud and our new cloud, and we simply started provisioning to the new cloud. So we didn't have to do a conversion: no upgrade, no migration, if that makes sense.

I see a couple of questions over here; go ahead.

Can you talk in a little more detail about the failure mode of a gateway going away? How long does it take for the router to see the BGP path torn down, and is there some sort of BFD going on to announce the health of that link?

Sure. There are a number of options, and the Midokura folks might be able to talk about what some of those particular options are. We have our BGP timers set very aggressively, to a very small number of seconds. They do have options around BFD and some other things, but some of our routers don't necessarily support those, so today we're not doing anything BFD-based; it's just very aggressive BGP timers.
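(Not from the talk: as a rough illustration of what "aggressive BGP timers" means on the physical router side, something like the following, shown in generic Cisco-style syntax with made-up addresses, AS numbers, and timer values rather than Overstock's actual configuration, cuts failure detection from the default 60-second keepalive / 180-second hold time down to a few seconds.)

    router bgp 64512
     neighbor 10.0.0.21 remote-as 64513
     ! keepalive 3 seconds, hold time 9 seconds (defaults are 60/180)
     neighbor 10.0.0.21 timers 3 9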
Let's go right here, and then we'll go over there.

Can you say anything about the scale of networks you're able to run?

Sure. Ours is a fairly simple layout where we give a network per tenant; our larger cloud probably has about 150 different networks. One thing that is unique to us, and again one of the reasons we liked the solution Midokura brought to us: we don't use floating IPs, we don't use public IPs, we use routed IPs from our corporate infrastructure direct to every single VM. There was a little extra work to do there, because many of the other solutions at the time, including Neutron's DVR, relied on having a floating IP from an IP pool that absorbed that traffic and forwarded it on. We do some tricks to allow every corporate desktop, essentially, to get directly to the VMs where appropriate, and we can still put security rules and those kinds of things in place, but that's how we did it.

Did you see any performance hit going from a physical infrastructure to a more software-defined network?

As a virtualization platform, we actually saw a performance increase. We did performance tests on bandwidth and how fast we could move data, on how many small sessions we could get going, and on failover, and in every one of those categories we saw an increase in performance.

How do you manage your physical switches and the physical networking?

Again, one of the nice things here is that there's a clean division. We've been around for a while, so we have a lot of existing infrastructure, switches, processes, routers, that kind of thing, and the way we manage those really hasn't changed much. We have a data center team that does the switch configurations for the servers, and a network team that does the routing for the rest of our infrastructure, and the two are essentially cleanly separated at this BGP border, if that makes sense.

Did you consider an SDN controller like OpenDaylight or ONOS or something?

We don't have OpenDaylight or anything like that in place in our new facility. The network team is moving in a direction like that which I'm not going to name because that's their call, but the infrastructure this sits on is not in that category.

It looks as if, in the overlay network, you have a direct tunnel between every compute node, so the number of tunnels would grow on the order of N squared with the number of compute nodes. Wouldn't that be a limitation as it scales up?

If any of the Midokura folks want to correct me on this, please feel free, but these tunnels aren't persistent: as a flow comes in, a tunnel is established, that flow is handled, and the tunnel is dropped. There are very specific sizing recommendations they can make for how much memory and heap to allocate and how many sessions to allow before you need to scale out with more compute nodes, more gateways, or separate tunnel zones. So it is a tuning thing and a scaling thing, but we feel comfortable that those dials all exist within the plugin.

We've been testing Nuage and OpenDaylight, and we're about to look at VTS. One of our biggest problems, as somebody else brought up, has been deploying the underlay, the bare metal and the switches, and keeping that in sync as we push networks down to OpenStack and give the end user the power to add networks. Have you solved that? Have you automated the underlay, or is that still a manual process?

That's a good question. For us, we haven't had to solve that challenge yet. The time I mentioned that we spent on provisioning and orchestration was really about allowing our end-user developers to create entire environments, tenants, networks, and all those things, in software. We really haven't had the challenge of the larger growth some of you may have on the physical provisioning side. We certainly like our server provisioning through things like Puppet and Ansible, and that is automated, but we haven't had to go to that level, if that makes sense.

All right, that looks like the end of the questions. Thank you very much for coming; you're welcome to contact me, and I'm sure the folks at Midokura would love to hear from you and field any questions as well. Thank you for coming.

Overstock.com recently made the big leap from nova-network to Neutron networking in order to increase network resiliency and reduce failover time. We also chose to use the MidoNet Neutron plugin. The results have been fantastic!
It used to be that moving 100 networks from the active nova-network node to the passive one resulted in network downtime for fifteen long, finger-crossing, pit-sweating minutes for us. Now that we have fully distributed, active/active network gateways, there is no failover delay at all.