OpenStack Developer Mailing List Digest, April 9-22
Success Bot Says
- Clarkb: infra team redeployed Gerrit on a new larger server. Should serve reviews with fewer 500 errors.
- danpb: wooohoooo, finally booted a real VM using nova + os-vif + openvswitch + privsep
- neiljerram: Neutron routed networks spec was merged today; great job Carl + everyone else who contributed!
- Sigmavirus24: Hacking 0.11.0 is the first release of the project in over a year.
- Stevemar: dtroyer just released openstackclient 2.4.0 – now with more network commands \o/
- odyssey4me: OpenStack-Ansible Mitaka 13.0.1 has been released!
- All
One Platform – Containers/Bare Metal?
- From the unofficial board meeting [1], an interesting topic came up: how to truly support containers and bare metal under a common API with virtual machines.
- We want to underscore how OpenStack has an advantage by being able to provide both virtual machines and bare metal as two different resources when the “but the cloud should …” sentiment arises.
- The discussion around “supporting containers” was different and was not about Nova providing them.
- Instead, work with communities on making OpenStack the best place to run things like Kubernetes and Docker Swarm.
- We want to be supportive of bare metal and containers, but the way we want to be supportive is different for each.
- In the past, a common compute API was contemplated for Magnum; however, it was understood that such an API would result in the lowest common denominator of all compute types and an exceedingly complex interface.
- Projects like Trove that want to offer these compute choices without adding complexity within their own project can rely on Nova to deploy virtual machines, bare metal, and containers (libvirt-lxc).
- Magnum will be having a summit session [2] to discuss if it makes sense to build a common abstraction layer for Kubernetes, Docker swarm and Mesos.
- There are expressed opinions that both native APIs and LCD APIs can co-exist.
- Trove being an example of a service that doesn’t need everything a native API would give.
- Migrate the workload from VM to container.
- Support hybrid deployment (VMs & containers) of their application.
- Bring containers (in Magnum bays) to a Heat template, and enable connections between containers and other OpenStack resources
- Support containers in Horizon.
- Send container metrics to Ceilometer
- Portable experience across container solutions.
- Some people just want a container and don’t want the complexities of others (COEs, bays, baymodels, etc.)
- Full thread
Delimiter, the Quota Management Library Proposal
- At this point, there are a fair number of objections to developing a service to manage quotas for all services. Instead, the discussion is about developing a library that services will use to manage their own quotas.
- You don’t need a serializable isolation level; a compare-and-update-with-retries strategy prevents multiple writers from oversubscribing any resource even at a lower isolation level.
- The “generation” field in the inventories table is what allows multiple writers to ensure a consistent view of the data without relying on heavy lock-based semantics in relational database management systems.
- Reservations don’t belong in a quota library.
- A reservation is a claim on some resource for a period of time.
- Quota checking returns whether the system can, right now, handle a request to claim a set of resources.
- Key aspects of the Delimiter Library:
- It’s a library, not a service.
- Impose limits on resource consumption.
- Will not be responsible for rate limiting.
- Will not maintain data for resources. Projects will take care of keeping/maintaining data for the resources and resource consumption.
- Will not have a concept of reservations.
- Will fetch project quota from respective projects.
- Will take into account whether a project is flat or nested.
- Delimiter will rely on the concept of a generation-id to guarantee sequencing. A generation-id gives a point-in-time view of resource usage in a project. Projects consuming Delimiter will need to provide this information when checking or consuming quota. At present, Nova [3] has the concept of a generation-id.
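The generation-id scheme above is essentially optimistic concurrency control: read the usage and its generation, check the limit, then write back only if the generation is unchanged, retrying on conflict. The sketch below is a minimal in-memory illustration of that compare-and-update loop; the class and function names are hypothetical, not Delimiter's or Nova's actual API.

```python
import threading


class QuotaExceeded(Exception):
    pass


class ResourceUsage:
    """In-memory stand-in for an inventories/usages table row."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self.generation = 0  # bumped on every successful write
        self._lock = threading.Lock()

    def read(self):
        # Point-in-time view of (used, generation).
        with self._lock:
            return self.used, self.generation

    def compare_and_update(self, expected_generation, new_used):
        # Succeeds only if nobody wrote since our read; in SQL this is
        #   UPDATE usages SET used = :new, generation = generation + 1
        #   WHERE generation = :expected
        with self._lock:
            if self.generation != expected_generation:
                return False
            self.used = new_used
            self.generation += 1
            return True


def consume(usage, amount, max_retries=5):
    """Claim `amount` of a resource without oversubscribing the limit."""
    for _ in range(max_retries):
        used, gen = usage.read()
        if used + amount > usage.limit:
            raise QuotaExceeded()
        if usage.compare_and_update(gen, used + amount):
            return
        # Another writer won the race; re-read and retry.
    raise RuntimeError("too much contention")
```

A losing writer's generation no longer matches, so its update fails and it retries against the fresh usage, which is why concurrent writers cannot oversubscribe the resource without any serializable isolation.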
- Full thread
Newton Release Management Communication
- Volunteers filling PTL and liaison positions are responsible for ensuring communication between project teams happens smoothly.
- Email, for announcements and asynchronous communication.
- The release team will use the “[release]” topic tag in the openstack-dev mailing list.
- Doug Hellmann will send countdown emails with weekly updates on:
- focuses
- tasks
- important upcoming dates
- Configure your mail clients accordingly so that these messages are visible.
- IRC, for time-sensitive interactions.
- You should have an IRC bouncer set up and be available in the #openstack-release channel on Freenode. You should definitely be in there during deadline periods (the week before and the week of each deadline).
- Written documentation, for relatively stable information.
- The release team has published the schedule for the Newton cycle [4].
- If your project has something unique to add to the release schedule, send patches to the openstack/release repository.
- Please ensure the release liaison for your project has the time and ability to handle the communication necessary to manage your release.
- Our release milestones and deadlines are date-based, not feature-based. When the date passes, so does the milestone. If you miss it, you miss it. A few projects ran into problems during Mitaka because of missed communications.
- Full thread
OpenStack Client Slowness
- In profiling the nova help command, it was noticed that a fair bit of time was spent in the pkg_resources module and its use of pyparsing. Could we avoid the startup penalty by not starting a new Python interpreter for each command we run?
- In tracing Devstack today with a particular configuration, it was noticed that the openstack and neutron commands run 140 times. If each one of those has a 1.5s overhead, we could potentially save 3½ minutes of Devstack execution time.
- As a proof of concept Daniel Berrange created an openstack-server command which listens on a unix socket for requests and then invokes OpenStackShell.run or OpenStackComputeShell.main or NeutronShell.run. The nova, neutron and openstack commands would then call to this openstack-server command.
- Devstack results without this tweak:
- real 21m34.050s
- user 7m8.649s
- sys 1m57.865s
- Devstack results with this tweak:
- real 17m47.059s
- user 3m51.087s
- sys 1m42.428s
- Some notes from Dean Troyer for those who are interested in investigating this further:
- OpenStack Client does not load any project client until it’s actually needed to make a REST call.
- Timing on a help command includes a complete scan of all entry points to generate the list of commands.
- The --timing option lists all REST calls that properly go through our TimingSession object. That should be all of them, unless a library doesn’t use the session it is given.
- Interactive mode can be useful to get timing on just the setup/teardown process without actually running a command.
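The proof of concept above amounts to a small persistent command server: pay the interpreter startup and entry-point scan once, then dispatch each CLI invocation over a Unix socket. The sketch below illustrates the pattern only; it is not Daniel Berrange's actual openstack-server code, and the socket path and handler are assumptions.

```python
import os
import socket
import threading

SOCKET_PATH = "/tmp/openstack-server.sock"  # illustrative path


def handle_command(argv):
    """Stand-in for dispatching to OpenStackShell.run() and friends.

    In the real proof of concept, interpreter startup, imports, and the
    entry-point scan are paid once here, not on every CLI invocation.
    """
    return "ran: " + " ".join(argv)


def serve(path=SOCKET_PATH):
    """Long-running server side: accept one command per connection."""
    if os.path.exists(path):
        os.unlink(path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            argv = conn.recv(4096).decode().split()
            if argv == ["quit"]:
                break
            conn.sendall(handle_command(argv).encode())
    srv.close()
    os.unlink(path)


def send_command(argv, path=SOCKET_PATH):
    """What thin `openstack`/`nova`/`neutron` wrappers would do instead
    of starting a fresh Python interpreter each time."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(path)
    cli.sendall(" ".join(argv).encode())
    reply = cli.recv(4096).decode()
    cli.close()
    return reply
```

The Devstack numbers above suggest why this helps: the per-invocation cost drops from interpreter startup plus entry-point scanning to one short socket round trip.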
- Full thread
Input Needed On Summit Discussion About Global Requirements
- Co-installability of big tent projects is a huge cost in energy spent. Service isolation with containers, virtual environments, or separate hosts avoids having to solve this problem.
- All-in-one installations are supported today largely because development environments use Devstack.
- Just like with the backwards-compatibility discussion for libraries and clients, OpenStack services co-existing on the same host may share the same dependencies. Today we don’t guarantee things will work if you upgrade Nova to Newton and it upgrades clients/libraries shared with a Cinder service still at Mitaka.
- Devstack support for virtual environments was pretty much already there, but it was stopped due to operator feedback.
- Traditional distributions rely on the community being mindful of shared dependency versions across services, so that it’s possible to use apt/yum tools to install OpenStack easily.
- According to the 2016 OpenStack user survey, 56% of deployments are using “unmodified packages from the operating systems”. [4]
- Other distributions are starting to support container-based packages, where the restriction of one version of a library at a time goes away.
- Regardless, global requirements [5] provide us a mechanism to encourage dependency convergence:
- Limits knowledge required to operate OpenStack.
- Facilitates contributors jumping from one code base to another.
- Checkpoint for license checks.
- Reduce overall security exposure by limiting code we rely on.
- Some feel this is a regression to the days of not having reliable packaging management. Containers could be lagging/missing critical security patches for example.
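The co-installability problem above comes down to services pinning a shared dependency at incompatible versions. A toy sketch of the convergence check that a global-requirements-style list effectively enforces (illustrative only, not the actual openstack/requirements tooling; the package names in the example are made up pins):

```python
def parse_pins(requirements):
    """Parse lines like 'oslo.config==3.9.0' into {name: version}."""
    pins = {}
    for line in requirements:
        line = line.split("#")[0].strip()  # drop comments
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip()] = version.strip()
    return pins


def find_conflicts(*per_service_requirements):
    """Report packages pinned to different versions by different services,
    which is exactly what makes them impossible to install on one host."""
    seen = {}       # package -> first version encountered
    conflicts = {}  # package -> set of all conflicting versions
    for reqs in per_service_requirements:
        for name, version in parse_pins(reqs).items():
            if name in seen and seen[name] != version:
                conflicts.setdefault(name, {seen[name]}).add(version)
            else:
                seen.setdefault(name, version)
    return conflicts
```

With a single global list, every project tests against the same pins, so this check is empty by construction; without it, apt/yum-style packagers are left to resolve the conflicts by hand.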
- Full thread