Rackspace resize how long

All instance sizes get the same 4 cores and roughly the same compute resources. CPU performance is about the same on a 1GB cloud server as on an 8GB cloud server; you are just paying for more memory.

EC2, on the other hand, offers roughly linear CPU performance improvement on larger instance sizes. EC2 also uses heterogeneous hardware, ranging from Opteron or Xeon E for m1 instances, Xeon E for c1 instances, and Xeon X for m2 instances, to Xeon X hyper-threaded to 16 cores for the cluster compute instances. Make sure you explore the limits of EBS before assuming it's a perfect solution.

We've found it to have incredibly slow throughput at times. However, I've decided that the other benefits of EBS volumes are just too important to give up: snapshotting, lazy loading, re-attachment. Instead, I plan to monitor for such situations and blow away the node when I detect one.

But really, I don't understand why Amazon doesn't fix this. It's happened to me a number of times in a relatively small installation. Surely their monitoring can detect this? I can handle suffering, but completely freezing? I, too, have seen these freezes occasionally on small instances. If you think that is harsh, then try one of the new micros.

If yours does not recover, then that would clearly be a bug. Is it because of the network throughput on the small instance type? I haven't noticed anything nearly that severe with our setup of larger instance sizes, but then again we've only run production on EC2 for a few months now.

As a heavy user of EBS, I heartily concur.

The ability to move EBS volumes around is handy but you pay it back in inconsistent, poor-to-mediocre performance. Good luck. We had these issues as well.

I'd certainly try AWS, but I'd wait a couple of months before shelling out the upfront cost for Reserved Instances. This blog post walks through a bunch of things you can do to make the best of the flexibility you get with EBS, including scheduler tuning, RAID block size planning, etc.

How do you get a consistent snapshot of multiple striped EBS volumes? EBS costs are pretty low. And my above post was directed to someone who is deploying a database on top of EBS. Sorry if I didn't make that clear.

Snapshots are still doable; they just require coordinating all the underlying devices to make sure they are in a consistent state. The way you do this can vary, and it's not particularly hard; it just makes things more complicated. You have to lock and snap all of the volumes at once. The ec2-consistent-snapshot tool makes this easier to manage.
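A minimal sketch of that lock-and-snap coordination, with the actual freeze/snapshot/thaw commands left as injected placeholders (a real version, like ec2-consistent-snapshot, would shell out to a filesystem freeze command and the EC2 snapshot API; the callables and volume IDs below are made up):

```python
def consistent_snapshot(volumes, freeze, snapshot, thaw):
    """Freeze writes, snapshot every device in the stripe, then thaw.

    The thaw always runs, even if one of the snapshots fails, so the
    filesystem is never left frozen.
    """
    freeze()  # quiesce writes so all devices agree on a point in time
    try:
        return [snapshot(v) for v in volumes]  # snap every stripe member
    finally:
        thaw()  # unfreeze no matter what

# Demonstrate the required ordering with stand-in callables.
calls = []
snaps = consistent_snapshot(
    ["vol-aaaa", "vol-bbbb"],  # hypothetical volume IDs
    freeze=lambda: calls.append("freeze"),
    snapshot=lambda v: calls.append("snap " + v) or "snap-of-" + v,
    thaw=lambda: calls.append("thaw"),
)
```

The point is only the ordering: all snapshots happen strictly between the freeze and the thaw, which is what makes a stripe of volumes mutually consistent.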

This is a rather unpleasant review of a service we were about to move over to. For companies that have moderate performance requirements: we've been considering Rackspace Cloud and Linode, but are open to any suggestions. AWS EC2 also sounds like a great fit in your environment.

Regardless, having a great server admin guide this implementation will be a key part of the chosen architecture's success. I run a server management company, so I have a bit of a bias ;). I've used Rackspace, Amazon and GoGrid.

I'm currently almost entirely on GoGrid and just finishing up migrating back off of Amazon for the second time (I migrated some services back to them when they opened the west coast). There have been some stability issues in the past year (instances going down for maintenance regularly), but I haven't had a problem with that in a while and was advised it was due to them upgrading all the servers in their cluster.

The ability to mix dedicated and cloud systems easily has been the killer feature for me. We also prepay, and the number of servers we get for the cost is pretty good.

MichaelGG on Nov 9: Have you looked at just buying the hardware and colocating it? Bandwidth can be found for a few dollars a meg. I wrote a thing about the pros and cons of doing that, but instead I thought it would be better to give you some objective numbers.

Note: unless you have really cheap hardware guys and downtime costs you little, I'd avoid used hardware. I started on used servers, and I no longer buy used hardware that is user-serviceable, though I may still buy used switches. If you want more advanced hardware troubleshooting, double that for someone really good.

Setting up a server should not take more than a few hours, including hooking up the serial console, etc. And hardware problems?

MichaelGG on Nov 9: Your point about onsite hands is something I forgot. In the companies I've had, I've always had offices adjoining the datacenter.

I've been lucky to end up with such great spaces. If you don't have onsite hands, then fixing even minor hardware issues can be a major pain.

If you do, then cheap used servers are fine, if your software is fault tolerant. If a machine goes down, others take over in a minute and you have slightly less capacity. No big deal. I just see these crazy high numbers for hosting, especially "cloud" stuff, and don't get it. Cloud in particular seems to only make sense if you're actually elastic, or scaling up very fast.

Amazon and so on's pricing, for "always on" servers, does not look that appealing. Personally, I think the very best use case for "cloud" is backup servers. Think of it: run a "small" instance to slave your database and other stuff to EBS storage, and once a month practice bringing up production on "the cloud". It's the cheapest and most complete backup system I can think of.
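The "always on" economics can be sketched with a quick break-even calculation. Every number below is a hypothetical placeholder, not a quote from Amazon or any colo provider:

```python
# Back-of-the-envelope comparison of renting an always-on cloud server
# versus owning the box and colocating it. All figures are made up.
cloud_monthly = 350.0   # hypothetical on-demand instance, $/month
server_capex = 3000.0   # hypothetical owned server, one-time purchase
colo_monthly = 120.0    # hypothetical rack space + power + bandwidth, $/month

def breakeven_months(cloud_monthly, server_capex, colo_monthly):
    """Months until owning the box beats renting it 24/7."""
    return server_capex / (cloud_monthly - colo_monthly)

months = breakeven_months(cloud_monthly, server_capex, colo_monthly)
```

With these made-up figures the owned server pays for itself in just over a year; plug in real quotes, and the labor costs discussed above, before drawing conclusions.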

Note: don't expect more than "remote hands", and make sure you clearly label everything. I really appreciate your insight on this.

I'm going to take this back to my team and discuss. It isn't worth it. Outsource hardware management to people who do it for a living. And certainly, if your cost per compute node doesn't matter, few will fault you for going with Amazon. But you should be aware that you are paying for that, and in some situations a high cost per compute unit will kill you.

This is a fair question, so I don't see why it should be downvoted. The reason outsourcing hardware to AWS or Rackspace makes sense, while hiring hardware guys usually doesn't, is that you introduce one more link in your failure chain. In my opinion it is far, far more likely that you and your hardware guy will mess up the ZFS configuration somehow and have it fail when your app gets black-swan traffic than that Amazon goes down.

When it comes to Engine Yard, I'm all for that too, in the right circumstances, but you get fairly high lock-in. But sure, Engine Yard and Heroku are great. While I agree that Amazon itself is unlikely enough to go down that you can pretty much bet the company on it, any one server at Amazon may go down at any time. Now, Amazon has the huge advantage of being able to give you another server at any time, but you need to be prepared for that.

That's a good point: you could argue it doesn't free you from the need for a good admin, and it actually makes it more difficult to hire someone who knows what they are doing; good cloud SA skills are much rarer than good traditional SA skills. When you can no longer just drive to the datacenter and toss in some more disks, drop your new F5 in, or hack it together with CARP, but instead have to be familiar with exactly what the restrictions and use cases around the cloud provider's offerings are, the field of candidates just got painfully smaller, and probably more expensive.

You're basically paying them for the luxury of not having to deal with AWS. So as long as they don't get undercut by a competitor and all their customers flee, they should be around for a while.

That, or their owners get greedy, sell the companies, and the new owners move to "more economical hosting". Nah, that never happens. Just like you are paying AWS for the luxury of not having to deal with hardware. Both can be rational decisions or not, depending on many factors. I'm just pointing out that the people who deal with racking and stacking (the sort that Amazon allows you to fire) are rather a lot less expensive than Linux sysadmins (the sort that App Engine, Heroku and Engine Yard allow you to fire).

Depends what your app is written in. I don't know if there are similar offerings that are reasonably priced yet awesome for other languages, though. I found Engine Yard very inflexible. Sure, you can create custom recipes to get around that, but since they update their system without warning, the custom recipes end up becoming stale. Yes, the problem is that they use their own Gentoo-based environment that they update as time goes on. To do anything custom that they didn't plan for beforehand, you need to create custom Chef recipes that will be executed whenever a new instance is provisioned, or if you deploy with their web interface, but not if you deploy with Capistrano.

The problem is that it's hard to make sure that the custom Chef recipes will work when the underlying environment changes, and you are not told ahead of time of those changes. What sort of stuff was in those recipes?

Linode are wonderful: best support I've had from anyone, ever. Nice instances, too! I'm doing all future development on Linode. Their basic VPS is pretty quick for the price, and it's online almost instantly. It's still just basic Linux, so you can rig everything you want; no awkward managed interfaces to worry about.

I would consider switching to a different company, but for now Linode scratches my itch better than the rest. I'm sorry to read that you've had a less than stellar experience with us. May I ask what happened? Create an account. Open a service ticket. Wait hours for a response.

Plus, having no firewall options on your Nitro servers, at that price range, is ridiculous and way below the standards of the industry. They don't have that crazy backup limitation that Rackspace does, and they're always open to custom inquiries.

Rackspace has the big-company mentality: if our interface doesn't have an option for it, it doesn't exist. I second that. Moved from Rackspace to Linode. The bandwidth overage charges are much more reasonable, too. I like Linode, but they are wearing a little thin for me. In general, uptime has been excellent, but the physical server hosting one of our VPSs had three emergency reboots in a little over a month before they replaced the hardware.

Perhaps they'd already planned to do a replacement after the second incident, but I had no way of knowing. Which brings me to my second beef: they don't share much when closing a ticket. Their explanations are pretty perfunctory.

In some cases, like transient network issues, I generally can't tell if they've investigated the problem and done something about it, or if the first-tier tech just takes a quick look, sees it's working now, and punts it back to me to let them know if there are more problems. Similarly, after a major screwup that took down most of the virtual servers they host (including the ones that hosted their own website and web-based management tool), all we got was a brief explanation of what had gone wrong and a status blog hosted on someone else's infrastructure.

What I didn't get was a sense of how they were going to guard against anything similar happening again. I'm sticking with them for now, but I'm also feeling like the effort I've put into automating the configuration of our application and all its dependencies has been time well spent.

I can boot a new infrastructure on another service in an hour or so and migrate to it with minimal downtime. What are they like for CPU usage? I have been looking at VPS hosting. Most VPS providers I have come across don't like it if you start working out the millionth digit of pi, but sometimes you have a task that takes a while and can't run offsite.

I believe each physical host has at least 8 real cores. They give each instance 4 virtual cores. As long as CPU time is available, an instance can use the full capacity of four cores. Further, larger instances share their host with a proportionally smaller number of other instances. The only thing to look out for is that there is quite a difference in single-threaded performance between their newer hardware and their older hardware. Our app is written in one of the popular interpreted dynamic languages.

Generic benchmarks for that language showed a severalfold difference on some tests when run on the different hardware, and our app showed similar differences on CPU-bound tasks. There are a few implications of this. First, it complicates things when you try to make your staging environment mirror production.

If staging and production end up on different hardware generations, open a ticket and ask them to migrate instances so things match. Second, while they try to size things appropriately so that your guaranteed CPU is the same, regardless of which generation of host you are on, your peak CPU is going to vary dramatically, since you get 4 cores regardless.

I hear lots of awesome things about Linode; however, they are actually hosted by Rackspace, so with Rackspace's known reliability issues... Another service to consider is VPS.NET.
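The proportional sharing described above (same 4 virtual cores for everyone, guaranteed share scaling with instance size) can be modelled in a few lines. The host sizing figures here are assumptions for illustration, not published specs:

```python
# Rough model of CPU sharing on a virtualized host: every instance sees
# the same number of virtual cores; the guaranteed share depends on how
# many neighbours fit on the host. Host numbers are hypothetical.
HOST_CORES = 8          # "at least 8 real cores" per physical host
HOST_RAM_GB = 64        # assumed host memory; not a published figure
VCPUS_PER_INSTANCE = 4  # every instance size gets the same 4 virtual cores

def cpu_shares(instance_ram_gb):
    """Return (neighbours, guaranteed cores, burst cores) for one instance.

    Assumes the host is packed with instances of the same size, so RAM
    is what determines how many guests share the CPUs.
    """
    guests = HOST_RAM_GB // instance_ram_gb  # instances sharing the host
    guaranteed = HOST_CORES / guests         # cores you can always count on
    burst = VCPUS_PER_INSTANCE               # cores when the host is idle
    return guests, guaranteed, burst
```

Under these assumptions a 1GB instance shares with 63 neighbours and is guaranteed only a sliver of a core, while an 8GB instance is guaranteed a full core, yet both can burst to the same 4 cores. That is exactly why peak CPU varies so much more than guaranteed CPU.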

We have 6 virtual servers with them. They are cheap and perform well, their control panel is great, and they have tons of great pre-configured OS installs. However, downtime can sometimes be an issue. Usually it's not major. Like I said, though, the outages are usually minor. There's only been one that I remember that was major, and it lasted several hours. We've been using VPS.NET; not sure I'd recommend them for your setup, but we only use them for a custom-built CDN, and for that it works very well.

Are you confusing Linode with Slicehost? Slicehost was acquired by Rackspace. Perihelion on Nov 9: Nope. Phew, thank you.

VPS.NET servers hosted on Softlayer data centers. JustHost on Liquidweb dedicated servers. Hostmonster on Bluehost servers.

Rackspace Cloud's DNS stuff stinks. No way to add TXT records -- you have to open a ticket! Sure, you can host it yourself, but every other cloud provider has this in their UI. I get the feeling they're just in "maintenance mode" over there and don't have anyone working hard on improving the offerings.

Their DNS used to be great, if simple. You could put in a record for your new server and by the time you'd logged in, the record was live. Now, it's gotten to the point where their DNS updates so slowly that I just re-use old subdomains instead of creating new ones, just so that I can get my project tested today rather than tomorrow. I'm working on moving everything over to Amazon; more flexibility. I like Rackspace Cloud, but they're falling behind and they either don't know, don't care, or can't catch up.

What is even more sad is that their DNS console for hosted servers and Slicehost is so much better.


