Transcript
Introduction to OnApp Cloud v3.0 beta Webinar, October 16th 2012
Today’s presenters:
• Andy Perrin, Product Marketing
• Stuart Haresnape, Product Manager
• Julian Chesterfield, Storage & Virtualization Architect
Agenda
• Introduction
• What’s new in OnApp Cloud v3.0
• Getting started with the beta
• Documentation
• Demo
• Q&A
About OnApp
OnApp software makes it easy to build and run your own cloud services.
Founded July 1st 2010 • 400+ clients • 1,000+ cloud deployments • 100+ CDN PoPs • 25,000+ cloud applications
About OnApp Cloud
A complete cloud management platform for service providers:
• Billing layer – utility billing, hourly/plan-based, user/reseller-based...
• User Management layer – role-based user admin, resellers, iOS/Android apps
• Deployment layer – provisioning, auto-scaling, failover, load balancers...
• Hypervisor/OS
v3.0 is a major new release, now in public beta.
v3.0 helps you…
• Enter new markets: VMware support, Live Streaming, Video on Demand, HTTP Push
• Reduce costs: Integrated SAN
• Speed up deployment: Cloud Boot
• Simplify management: New User Interface, New Stats Receiver, Filtered Logs, EXT4 Disk Support
• Improve security: Password Complexity, ICMP (ping) Blocking, Disk Wipe
GA expected in Q4 2012
New Features
Improved user interface
Updated look and feel with a renewed focus on usability:
• Wizard-driven
• Simpler VM management
• Roles & permissions
• Dashboard
• CDN
VMware & OnApp
OnApp can now manage VMware ESXi hypervisors.
• Requires external installation of vCenter & the Vyatta Community Edition firewall.
• Developed for the VMware VSPP Standard Model: if you’re using the VSPP model you’ll continue to pay normal vRAM pricing.
• OnApp licensing remains the same.
VMware & OnApp: architecture

[Diagram: the OnApp Control Panel manages both a VMware cluster and Xen/KVM hypervisor zones. On the VMware side, ESXi HVs running VMs sit behind vCenter/vSphere 5.0, reached via the vSphere API, with vMotion between ESXi hosts. On the Xen/KVM side, HVs use Cloud Boot and SSD/SATA drives in the OnApp Distributed SAN.]

VMware: vCenter manages individual VM resources & migrations; OnApp sees the combined resources (CPU, RAM, disk) of all ESXi HVs.

Xen/KVM: OnApp manages individual VM resources & migrations, and supports the OnApp Distributed SAN & Cloud Boot.
VMware & OnApp: things to note
• There are some things you shouldn’t do in vCenter, e.g. powering VMs on and off.
• There are some things you can’t do from the OnApp UI: migrate a VM, perform a backup, reboot a hypervisor, or put it into Maintenance Mode.
• Other VMware scenarios, e.g. SRM or remote clusters, have not been tested.
Backups
VMware:
• Uses the VMware snapshot tools.
• Takes a copy of the full state, RAM contents and machine build.
Xen/KVM – there are two models:
• Basic – backups on the HV (pre-2.3.2)
• Advanced – backups on the Backup Server (2.3.3)
Backups are now account-centric rather than VM-centric: you can now delete all backups when deleting a VM.
Cloud Boot
Dramatically reduces deployment times.
• Fast provisioning of Xen & KVM HVs, with no pre-installation.
• The server boots over the network (no local storage required) as a fully configured hypervisor, ready to host VMs.
• Currently boots CentOS 5.6/6.0 over an NFS root.
• A DHCP service and a PXE server must be enabled.
• All HVs must reside on the same VLAN.
• Open vSwitch is not supported.
Other new features in v3.0
• Password complexity: enforce rules for allowable passwords, including minimum length, special characters and numbers.
• Disk wipe: zero a disk when deleting it.
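A password-complexity rule of the kind described (minimum length, at least one number and one special character) could be sketched as below. The thresholds and function name are illustrative, not OnApp’s actual defaults:

```python
import re

def password_ok(password, min_length=8):
    """Check a password against illustrative complexity rules:
    minimum length, at least one digit and one special character."""
    if len(password) < min_length:
        return False
    if not re.search(r"\d", password):            # at least one number
        return False
    if not re.search(r"[^A-Za-z0-9]", password):  # at least one special character
        return False
    return True

print(password_ok("secret"))       # too short, no digit or special character
print(password_ok("s3cur3-Pa55"))  # passes all three checks
```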
• ICMP (ping) blocking: firewall rules to accept/drop TCP, UDP or ICMP.
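The accept/drop semantics of per-protocol firewall rules can be illustrated with a minimal first-match evaluator; this is a sketch of the concept, not OnApp’s implementation:

```python
def apply_rules(packet_proto, rules):
    """Return the action of the first rule matching the packet's
    protocol; default-accept if nothing matches.
    rules: list of (protocol, action), action is 'ACCEPT' or 'DROP'."""
    for proto, action in rules:
        if proto == packet_proto:
            return action
    return "ACCEPT"

# Block pings, explicitly allow TCP, everything else falls through.
rules = [("ICMP", "DROP"), ("TCP", "ACCEPT")]
print(apply_rules("ICMP", rules))  # DROP: ping blocked
print(apply_rules("UDP", rules))   # ACCEPT: no matching rule
```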
• SNMP stats receiver: collects stats from HVs and VMs and transfers them to the CP server for processing, using the SNMP protocol.
• Open vSwitch: multi-layer software network switches, providing VMs with network connectivity.
• KVM on CentOS 6: KVM has been re-engineered on CentOS 6 to give better stability, security and performance.
Plus hundreds of other fixes & improvements
Enhanced CDN capabilities
Meet the huge demand for video and streaming.

Video & live streaming CDN:
• RTMP / RTMPE / RTMPT
• HDS
• RTSP/RTP (Android devices)
• iPhone
• Silverlight
• Video delivery via HTTP pseudo-streaming (FLV/MP4)
• Redesigned UI & Marketplace

Other improvements:
• HTTP Push for large static content (e.g. YouTube-style uploads)
• Enhanced HTTP Pull: multiple pull origins for redundancy
• Better/smarter DNS redirection, integrated with Google DNS & OpenDNS
Integrated Storage
OnApp Storage (our distributed SAN) is now included with OnApp Cloud v3.0.
• Enterprise-class storage platform
• Based on your existing cloud hardware investment
• Leverages existing integrated storage drives across cloud hypervisor nodes
• Presents these as a single, virtual SAN
• Can run alongside, or instead of, your centralized SAN
Key Storage features
Fault tolerant:
• No central single point of failure
• Self-healing & user-initiated repair
• Configurable data replication
• Easy offline data migration
High performance:
• Configurable striping
• Optimized disk I/O
• Optimized VM placement
• Multiple performance tiers
Low cost:
• Over-commit improves hardware ROI
• Supported on commodity hardware
• No need for a separate SAN
• Support & integration bundled in
Built for Cloud:
• Cloud boot for fast deployment
• Linear growth with hypervisors
• Online hot-swap of drives
• Physical & virtual disk IOPS reporting
How does Storage work?
A smart, independent array of storage nodes.

[Diagram: hypervisors HV1…HVn, each with local drives (SATA 5600/7200 and SSD), present virtual storage drives for Customers A and B over a multicast channel. Bonded NICs provide bandwidth aggregation across a commodity fast-switched Ethernet backplane, and drives are grouped into High/Mid/Low performance datastores.]
Optimized virtual machine placement
When migrated, a VM is placed on the HV whose physical disk contains a replica of its data.

[Diagram: VMs 1–5 mapped across HVs, physical disks and a virtual disk, showing a VM co-located with its data replicas.]

Writes are distributed; reads are local, to minimize network traffic.
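The placement rule above (prefer a hypervisor that already holds a replica, so reads stay local) could be sketched as follows; the data structures and function name are hypothetical, not OnApp’s actual API:

```python
def pick_hypervisor(vm_replicas, hv_disks):
    """Prefer a hypervisor whose local physical disk holds one of the
    VM's data replicas, so reads need not cross the network.
    vm_replicas: set of disk IDs holding a replica of the VM's vdisk.
    hv_disks: mapping of hypervisor name -> set of local disk IDs."""
    for hv, disks in hv_disks.items():
        if disks & vm_replicas:   # a replica is local to this HV
            return hv
    # Fall back to any HV; reads will then be remote.
    return next(iter(hv_disks))

hvs = {"hv1": {"d1", "d2"}, "hv2": {"d3", "d4"}, "hv3": {"d5"}}
print(pick_hypervisor({"d3", "d5"}, hvs))  # hv2 holds a local replica
```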
Takeaways from the Storage beta
• Hardware pass-through: PCI bus resource sharing and hardware driver support in the storage controllers; better support for paravirtual mode is required.
• Network performance is critical.
• Setting up Cloud Boot/PXE boot requires better guidance: customers may benefit from a complete deployment reference including IP address schema, network diagrams etc.
vDisk performance profiling
• Focus on optimizing data path performance.
• With large block sizes we are able to saturate the disk SATA bus for 4 SSDs over a local path, at a maximum of 1100 MB/s.
• Request sizes of 64K and above match raw disk performance.
• Request sizes from 4K to 64K require additional optimizations to reach raw disk performance.
Network performance profiling
• Logical NIC bonding support is fully integrated.
• KVM on CentOS 6 bonding slightly outperforms Xen on CentOS 5.
• Bonded NIC performance (no hardware LACP) achieves around 90% of the combined physical-layer maximum.
• Xen requires an even number of NIC pairs for best performance.
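As a rule of thumb from the profiling figures above, usable bonded throughput can be estimated as roughly 90% of the summed link capacity. The 90% efficiency factor comes from the beta results; everything else here is illustrative:

```python
def bonded_throughput_gbps(nic_count, link_gbps=1.0, efficiency=0.9):
    """Estimate usable throughput of a software NIC bond (no hardware
    LACP) as ~90% of the summed link capacity, per the beta profiling."""
    return nic_count * link_gbps * efficiency

# e.g. a 4 x 1GbE bond yields roughly 3.6 Gbit/s usable
for n in (2, 4):
    print(f"{n} x 1GbE bond ~ {bonded_throughput_gbps(n):.1f} Gbit/s")
```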
Storage developments planned for GA
• Over-commit
• Support for a standalone provisioning server
• Resizing virtual disks
• Online hot-swap of drives
• Dynamic disk content migration
• Physical and virtual disk IOPS reporting
Getting Started
Important notes
The v3.0 beta is available for download today.
• OnApp customers: https://dashboard.onapp.com
• New to OnApp? http://onapp.com/cloud-v3-beta/
There are no restrictions on scale (clouds/cores). However, please note:
• Not for production use!
• CDN is not included.
• Upgrades from R2.x are not supported for the beta.
• Storage Beta customers should do a fresh install of the 3.0 beta.
Requirements for the v3.0 beta

Controller Server:
• 8GB RAM (16GB+ recommended)
• Dual or Quad Core 2GHz+
• 100GB RAID 1
• 2x Gigabit NICs:
  NIC1 – uplink
  NIC2 – management network
• CentOS 5.x x64

Hypervisor Servers (Xen/KVM):
• 8GB+ RAM
• Quad Core 2GHz+
• Any SATA/SSD hard drives
• 3+ NICs:
  NIC1 – management network
  NIC2 – storage network
  NIC3 – appliance network
  NIC4–NICn (if available) assigned to storage or vMotion

Consult the VMware documentation if running ESXi HV servers & vCenter. Check the Preparation and Installation Guides for network configuration & hardware specifications.
Online documentation
Admin Guide • API Guide • Release Notes • Deployment Guide • Installation Guide
https://onappdev.atlassian.net/wiki/
Demo
Q&A
For the integrated SAN, do we need to set up hardware or software RAID? What is the recommendation? Certainly you can use hardware RAID, though it’s not a requirement. In fact, because of the way we build redundancy and fault tolerance across hypervisors, you actually get more benefit from applying software RAID across the whole platform. Obviously, if you group physical drives together at the RAID controller level on each server, that will give you tolerance to individual drive failure, but not server failure, so that’s something we can apply at a level above. Really it’s up to you: you may get better performance using hardware RAID directly on the server, but if you want to make sure you can access storage in the event of a hypervisor outage, it’s important that you have replication across hypervisors as well.
Will there be RAM over-commit? For VMware, yes. For other hypervisors, it’s something we’re still investigating and considering.
If you do shut down a VM from vCenter, will this mean that OnApp doesn’t work correctly? Will it lose sync with the environment? OnApp won’t know about changes made to VMs directly from vCenter. It will, however, know if you put a HV into Maintenance Mode or shut it down.
Will you ever provide a fully integrated billing system? All the existing ones either suck badly or are not supported well, or at all. We will continue to improve and expand billing integration with other systems, but we don’t rule out offering a fully integrated internal billing system in the future.
About the SAN feature: I assume all the HVs will be connected to a separate SAN switch for the disk transfer traffic. What happens when the SAN switch suddenly dies or is taken offline? As with any storage infrastructure, we’d always recommend you build redundancy into the network platform as well. We support bonding at the hypervisor level, so you can have multiple network paths; we support both hardware and software bonding. In terms of the SAN itself, one of its core properties is that it is very robust and tolerant of failure. Each drive has a smart controller that manages the content on that drive, and as you allocate storage objects (e.g. when you create a customer vDisk), the members that are going to own that content know about each other: they are kept completely synchronized, and know if any member is out of sync or inaccessible for any reason.
What is the list price for the SAN software? OnApp Storage will be bundled with the full version of OnApp Cloud v3.0 when it reaches General Availability towards the end of 2012. A standalone version is also planned: pricing for this has not been announced, but it is likely to feature a pay-as-you-grow model priced per GB of allocated storage.
You mentioned Open vSwitch will be supported in V3... how will this affect currently installed networks? With v3.0 you’ll have the option of continuing with the existing networks, as you do now.
When are you looking to introduce IPv6 support? OnApp Cloud already supports IPv6, but we’re aware that some features are limited: we’re working to bring IPv6 support to the same level as IPv4.
There was talk of an update to the WHMCS module by OnApp with v3 - is this true? What is the current status of the WHMCS Module from OnApp? The existing WHMCS module currently works with 3.0 Beta, and we’ll be working closely with WHMCS to keep enhancing that integration in future releases.
If one wanted to still have a minimum of 2 dedicated data storage units, could the new storage system be used on those, rather than being limited by the drive bays on the HVs? Yes. OnApp Storage can be used to power a separate cluster dedicated to storage only, not acting as hypervisors. Currently those servers would still have to be presented to OnApp as HVs, but in the future they will be able to run as a completely standalone system.
If we are about to do a new install, is it better to start with the 3.0 version instead of installing 2.x and then dealing with an upgrade in a few months? It depends when you want to go live. You can build a production cloud now using the current release candidate (2.3.3). If, however, you don’t plan to go live until 2013, it may be worthwhile trialling the v3.0 beta now and moving to the GA version when it’s available. We’d recommend you speak to your sales or account manager to discuss.
Will the v3.0 beta allow upgrading to the GA version when it’s available? It is usually not a good policy to upgrade from a beta environment to a production system. We recommend a clean wipe and re-install when the GA version is ready. At the moment there are no plans for an upgrade path from the beta.
Does the storage system include/provide support for any form of hypervisor fencing to prevent corruption in the case of a partial hypervisor failure? I.e. will the storage layer sever/refuse connections from a hypervisor if OnApp detects it as being offline, rather than continue to process VM writes both from a partially offline hypervisor, and the online hypervisor on which the VM has been restarted? All storage transactions for a given storage object, such as a virtual disk, involve an atomic protocol across all owning members of that specific content. Before making a virtual disk active we enforce that all members are in-sync. Any member that is not in sync is rejected from the active group, and marks itself as out of sync, requiring update. If any member is not accessible, the SAN ensures via a secondary channel that the member + hypervisor are completely dead, and only in that case will the activation be allowed to proceed. Note that when coming back online, e.g. after a reboot, a member never automatically activates itself; activation can only ever happen as part of a larger atomic operation across the whole set. In short, these basic primitives ensure that writes or reads can never accidentally go to an out of sync member, and once a member gets out of sync, it must be explicitly repaired in order for it to become part of the active set again.
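The activation rule described in that answer, where a virtual disk only goes active if every owning member is in sync, and out-of-sync members are excluded until explicitly repaired, can be sketched as below. All names and structures here are hypothetical, not OnApp’s internals:

```python
class Member:
    """One replica-owning storage node for a virtual disk."""
    def __init__(self, name, version, reachable=True):
        self.name = name
        self.version = version        # content version of this replica
        self.reachable = reachable
        self.out_of_sync = False

def activate_vdisk(members):
    """Return the active set for a virtual disk, or raise if activation
    must not proceed. All reachable members must hold the latest content
    version; stragglers are marked out-of-sync and excluded."""
    reachable = [m for m in members if m.reachable]
    if len(reachable) != len(members):
        # A member is inaccessible: the real SAN verifies over a
        # secondary channel that member + hypervisor are dead first.
        raise RuntimeError("unreachable member: verify liveness before activating")
    latest = max(m.version for m in reachable)
    active = []
    for m in reachable:
        if m.version == latest:
            active.append(m)
        else:
            m.out_of_sync = True      # needs explicit repair to rejoin
    return active

members = [Member("hv1", 7), Member("hv2", 7), Member("hv3", 6)]
active = activate_vdisk(members)
print([m.name for m in active])       # hv3 is excluded until repaired
```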
Will you implement more detailed CPU stats per each HV? This has been logged as a feature request and will be considered for a future release. Thanks!
Do we have the option to enable/disable CPU Share settings? The ability to allow a VM to take an entire server (i.e. a 100% share) is planned for a future release.
Get Involved! Join the beta:
http://onapp.com/cloud-v3-beta
Support forum: http://forum.onapp.com Feedback:
[email protected]