After a preview offered access to Skype’s live translation tool on the desktop earlier this summer, the feature is rolling out to all users. If you’re in need of a quick refresher, Skype Translator translates voice and video calls in English, French, German, Italian, Mandarin and Spanish, plus instant messages in 50 languages, inside the Windows app. The company says that the software leverages machine learning, so it’ll only get better as more people use it. In fact, folks who signed up for the preview have already pitched in there. When the tool arrives, you’ll notice a new translator icon in Skype that’ll let you know it’s ready to go to work.
Following Moore’s law is getting harder and harder, especially as existing components reach their physical size limitations. Parts like silicon transistor contacts — the "valves" within a transistor that allow electrons to flow — simply can’t be shrunk any further. However, IBM announced a major engineering achievement on Thursday that could revolutionize how computers operate: it has figured out how to swap out the silicon transistor contacts for smaller, more efficient carbon nanotubes.
The problem engineers are facing is that the smaller silicon transistor contacts get, the higher their electrical resistance becomes. There comes a point where the components simply get too small to conduct electrons efficiently. Silicon has reached that point. But that’s where the carbon nanotubes come in. These structures measure less than 10 nanometers in diameter — that’s less than half the size of today’s smallest silicon transistor contact. IBM actually had to devise a new means of attaching these tiny components. In the new technique, known as an "end-bonded contact scheme," the 10 nm electrical leads are chemically bonded to the metal substructure. Replacing these contacts with carbon nanotubes won’t just allow computers to crunch more data, faster. This breakthrough ensures that they’ll continue to shrink, following Moore’s Law, for several iterations beyond what silicon components are capable of.
"These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems," Dario Gil, vice president of Science & Technology at IBM Research, said in a statement. "As technology nears the physical limits of silicon, new materials and circuit architectures must be ready to deliver the advanced technologies that will drive the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected." The study will be formally published October 2nd, in the journal Science. This breakthrough follows a number of other recent minimization milestones including transistors that are only 3-atoms thick or constructed from a single atom.
The wait is nearly over, X-Files fans! We’re just a few short months from the debut of the new X-Files. Like the original series, this six-episode mini-season is being produced by Chris Carter. David Duchovny and Gillian Anderson reprise their lead roles as Mulder and Scully, while many of the show’s supporting cast (like the Cigarette-Smoking Man and FBI Director Skinner) will also reportedly be making appearances. The new series launches on January 24th, immediately following the NFC Championship game.
What you see above may look like an unremarkable slice of electronics, but it can theoretically power a low-energy device forever, and for free. If that sounds like a big deal, well… that’s because it is. Drayson Technologies today announced Freevolt, a system that harvests energy from radio frequency (RF) signals bouncing around in the ether and turns it into usable, "perpetual power." Drayson isn’t exactly a household name, but the research and development company has a particular interest in energy, especially where all-electric racing is concerned. And now it’s developed the first commercial technology that literally creates electricity out of thin air.
We’re constantly surrounded by an ever-denser cloud of RF signals. They’re the reason your smartphone gets 2G, 3G and 4G coverage, your laptop gets WiFi, and your TV receives digital broadcasts. Capturing energy from this background noise is nothing new, but most proof-of-concept scenarios have employed dedicated transmitters that power devices at short ranges. Furthermore, research into the field has never really left the lab, though a company called Nikola Labs is hoping to release an iPhone case that’s said to extend battery life using RF energy harvesting.
According to Drayson, Freevolt is the first commercially available technology that powers devices using ambient RF energy, no dedicated transmitter required. The key to Freevolt is said to be the efficiency of its three constituent parts. A multi-band antenna scavenges RF energy from any source within the 0.5-5GHz range, which is then fed through an "ultra-efficient" rectifier that turns this energy into DC electricity. A power management module boosts, stores and outputs this electricity — and that’s all there is to it.
Freevolt may well be the most efficient system of its kind, but it’s still only viable for devices that require very little power. In a location where lots of RF signals are flying around, like in an office, a standard Freevolt unit can produce around 100 microwatts of power. That’s nowhere near enough to, say, run your smartphone, but Drayson has some specific use cases in mind. The company thinks Freevolt can be the backbone of the connected home, and in a broader sense, the internet of things. Sensor-based devices, such as a smart smoke alarm, can be powered by Freevolt indefinitely. Beacons that provide indoor mapping and targeted advertising are also perfect candidates.
While it’s easy to visualize specific examples — a smoke alarm that never needs a new battery, or a low-power security camera that isn’t bound to a mains outlet — the true potential of Freevolt is hard to grasp. We’re talking about free energy here: devices that never need charging, cost nothing to run, and aren’t limited by the location of an external power source. An entire smart city — where roads know when they’re busy and bins know when they’re full — could be devised using countless sensors that require no upkeep, and have no overheads beyond the price of the hardware itself. It’s a powerful idea, and beyond sensors, Drayson imagines Freevolt being used to trickle-charge all kinds of hardware, significantly extending the battery life of a wearable, for instance.
What’s more, Freevolt can be scaled up for applications that require higher power outputs, and Drayson is currently working on miniaturizing its initial reference design and creating a flexible version that can be integrated into clothing, among other things. There are limitations to the technology, of course. The amount of power Freevolt can harness depends on the density of ambient RF signals, which are far more prevalent in urban areas than in the countryside. A sensor-based product could still operate in these lower-yield environments, though, by monitoring a value every five minutes instead of every five seconds, for example.
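To make that duty-cycle tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. The roughly 100 microwatt harvesting budget comes from the figures above; the sensor power numbers are illustrative assumptions, not Freevolt specifications.

```python
# Back-of-the-envelope check: can an RF-harvesting budget sustain a sensor?
# The ~100 uW figure comes from the article; the sensor power numbers
# below are illustrative assumptions, not Freevolt specifications.

HARVEST_BUDGET_W = 100e-6   # ~100 uW in a signal-rich office (per Drayson)

ACTIVE_POWER_W = 15e-3      # assumed draw while sampling/transmitting
ACTIVE_TIME_S = 0.05        # assumed 50 ms per measurement
SLEEP_POWER_W = 2e-6        # assumed deep-sleep draw

def average_power(interval_s):
    """Mean power draw for one measurement every `interval_s` seconds."""
    duty = ACTIVE_TIME_S / interval_s
    return duty * ACTIVE_POWER_W + (1 - duty) * SLEEP_POWER_W

for interval in (5, 60, 300):  # every 5 s, every minute, every 5 minutes
    avg = average_power(interval)
    verdict = "OK" if avg <= HARVEST_BUDGET_W else "exceeds budget"
    print(f"every {interval:>3} s: {avg * 1e6:7.1f} uW ({verdict})")
```

Under these assumptions, sampling every five seconds overshoots the budget while sampling every minute or five minutes fits comfortably, which is exactly the adjustment described above.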
Drayson’s business model involves selling licenses to Freevolt and its related patents, as well as offering guidance and technical support to interested parties. Development kits are also available to pre-order from today, so advanced tinkerers can get their hands on the tech too. It might take some time before Freevolt finds its way into products, as Drayson is relying primarily on other companies to dream up and develop real-world applications. That said, Drayson has created a consumer product of its very own that’s powered solely by Freevolt: an air pollution monitor called CleanSpace.
The CleanSpace Tag is a continuous carbon monoxide monitor that sends data back to your smartphone via Bluetooth. From the companion app, you can see real-time air pollution levels and review your exposure over the past day, recent weeks and beyond. The app also keeps tabs on your travels, encouraging you to build up "CleanMiles" by walking and cycling rather than taking motorized transport. These banked CleanMiles can then be exchanged for rewards provided by partners such as Amazon, incentivizing you to travel in non-polluting ways.
Air pollution is of particular interest to Lord Drayson, chairman and CEO of Drayson Technologies, who hopes to increase awareness of the invisible health risk. But, there’s also a bigger picture. The CleanSpace app uses data from the 110 static sensors dotted around London to build a pollution map of the capital. Each CleanSpace Tag also feeds anonymized data into this system, with the idea being the more tags in the wild, the more locally relevant and robust that UK pollution map can become. CleanSpace users can therefore decide on the fly to avoid more polluted areas in favor of cleaner routes. The plan is to expand the crowdsourced data concept elsewhere if it’s well received, but for now the CleanSpace Tag is only available in the UK through a crowdfunding campaign. Pricing starts at £55 per tag, though you might want to buy one just to rip it open and see the Freevolt backbone hidden inside.
Terry Swanborough has updated RiscPCB – his application for designing printed circuit boards, which can be saved out as Draw files or industry standard Gerber files – to make it compatible with the Raspberry Pi, and at the same time added a few other features: Up to eight layers can now be used (six copper, […]
Researchers have created the first optical-only chip that can permanently store data, a discovery that could lead to storage devices that leave SSDs in the dust. Non-volatile flash memory currently relies on electronic chips, which are speed-limited by the heat and resistance generated by colliding electrons. Light-based circuits don’t have that problem, but so far "nano-photonic" chips created by the likes of IBM have been volatile (they need constant power to retain data), making them a non-option for permanent storage. The team from Oxford and the Karlsruhe Institute of Technology in Germany managed to solve that problem using a familiar light-based storage medium: DVDs.
Re-writable DVDs and CDs save data using a material called "GST" — an alloy made from germanium, tellurium and antimony — that changes its structure when hit by a laser. The UK and German team built a chip using "waveguide" technology that directs light through channels etched into a silicon-nitride material. The chip was coated with nanoscale GST, then blasted by a high-intensity laser through the waveguide channels. That changed the GST from a consistent crystalline structure into an amorphous blob, which was detected by another low-intensity laser and read out as data.
A nano-photonics chip developed by IBM Research
The GST transforms back to a crystalline state when hit with another high-intensity shot, making for a true rewritable device. By varying the intensity and wavelength of the lasers, the team was even able to store up to 8 bits of data in a single location, a big improvement over binary electronic devices.
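To illustrate the multi-level idea, here is a toy Python model: a cell stores one of 2**k distinguishable analog states rather than just two, and a noisy readout is quantized back to the nearest state. The encoding, the 3-bits-per-cell choice and the noise figure are invented for the demo and are not the researchers’ actual modulation scheme.

```python
# Toy model of multi-level phase-change storage: k bits per cell become
# 2**k distinguishable states, read back from a noisy analog level.
# All numbers here are invented for illustration, not from the paper.
import random

K_BITS = 3                  # 3 bits -> 8 levels, for clarity
LEVELS = 2 ** K_BITS

def write_cell(symbol):
    """Map a k-bit symbol to a target analog level in [0.0, 1.0]."""
    return symbol / (LEVELS - 1)

def read_cell(level):
    """Quantize a (possibly noisy) readout back to the nearest symbol."""
    return min(range(LEVELS), key=lambda s: abs(write_cell(s) - level))

for symbol in range(LEVELS):
    noisy = write_cell(symbol) + random.uniform(-0.03, 0.03)  # read noise
    assert read_cell(noisy) == symbol

print("all", LEVELS, "levels decoded correctly under +/-0.03 read noise")
```

The practical limit on bits per cell is set by how finely the read laser can distinguish levels against noise, which is why varying both intensity and wavelength matters.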
While the research is promising, there’s still a lot of work to do before commercial light-based devices appear. For starters, the chips will have to be hundreds of times smaller before they can compete with flash storage. However, the prototype chip is on par with its electronic counterpart for speed and power consumption, and the technology already exists to make it commercially feasible, according to the team. If paired with photonic logic chips, it could eventually result in computers that are up to 100 times faster than the one you’re using now.
APIs — the rules governing how software programs interact with each other — not user interfaces, will upend software for years to come. When Intel CEO Brian Krzanich doubled down on the Internet of Things at the company’s annual Developer Forum in August, he emphasized what many of us have already known — the dawn of a new era in software engineering.
This article originally appeared on Fast Company and is reprinted with permission.
By Greg Lindsay
I almost didn’t notice I was wearing it, at first. The plastic box strung around my neck was roughly the size and weight of a deck of cards, lighter than I expected. It was only when I spotted the occasional flash of blue light that I remembered this "sociometric badge" was listening to everything I said, where I said it, and to whom—especially if they were wearing a similar device around their own necks. In those cases, our conversations were captured for analysis—ignoring what we said in favor of how long we spoke, and who did all the talking.
I started to turn painfully self-conscious around my first visit to the bathroom: Did the badge know I was in there? Would it listen? Would it freak someone out that I was wearing a giant sensor in the stall next to him? By the time I left the building for lunch, I had zipped it beneath my jacket, less concerned that it was counting my every step than with having civilians think I was some new species of Glasshole.
Like Google Glass, sociometric badges were prototyped in Alex "Sandy" Pentland’s Human Dynamics Lab within the MIT Media Lab—a place where his cyborg doctoral students once wore keyboards on their heads and no one thought it strange. Unlike Glass, the badges are still a going concern—five years ago, Pentland and several former students spun out a company now called Humanyze to consult for such companies as Deloitte and Bank of America. Just as Fitbits measure vital signs and REM cycles to reveal hidden truths about their wearers’ health, Humanyze intends to do the same for organizations—only instead of listening to heartbeats, its badges are alert for face-to-face conversations.
For two weeks in April, Fast Company was one of those subjects. (Humanyze provided the badges and analysis for free.) Twenty Fast Company editorial employees—and me, as a visiting observer—agreed to wear the badges whenever we were in the building. Our goal was to discover who actually speaks to whom, and what these patterns suggest about the flow of information, and thus power, through the office. Is the editor in chief really at the center of the magazine’s real-world social network, or was someone else the invisible bridge between its print and online operations? (Or worse, what if the two camps didn’t speak at all?) We would try to find out, though we would be hampered somewhat by the fact that not everyone was wearing a badge, and we didn’t give Humanyze the full range of data, like integration into our email and Slack conversations, that would allow the company to truly understand our work relationships.
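Incidentally, the "invisible bridge" question maps onto what network analysis calls betweenness centrality. Here is a minimal, hypothetical Python sketch of that kind of computation using networkx; the names and conversation data are invented, and this is not Humanyze’s actual pipeline.

```python
# A minimal sketch of the kind of analysis run on badge data: given who
# talked to whom, betweenness centrality flags the "invisible bridges"
# between groups. The people and edges below are invented examples.
import networkx as nx

conversations = [            # (person_a, person_b, total_minutes)
    ("editor_in_chief", "print_editor", 90),
    ("editor_in_chief", "web_editor", 15),
    ("print_editor", "staff_writer_1", 60),
    ("web_editor", "staff_writer_2", 45),
    ("photo_editor", "print_editor", 30),
    ("photo_editor", "web_editor", 40),   # photo editor spans both camps
]

g = nx.Graph()
for a, b, minutes in conversations:
    g.add_edge(a, b, weight=minutes)

# Who sits on the most shortest paths between everyone else?
for person, score in sorted(nx.betweenness_centrality(g).items(),
                            key=lambda kv: -kv[1]):
    print(f"{person:16s} {score:.2f}")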
More important were the questions we chose not to ask: How did these patterns impact performance? Should editors and writers talk less or more, and what did it mean when they talked amongst themselves? Did it result in more posts on Fast Company‘s website, or more highly trafficked ones? Demonstrating and understanding these relationships are what Humanyze’s clients pay for; perhaps we were too scared to learn.
For the better part of two weeks, staff members suffered the badges in silence. Some people found wearing them uncomfortable and awkward. "It was oppressive," says associate news editor Rose Pastore. "I think it ruined my posture." "It does not play well with statement necklaces," says senior editor Erin Schulte, who, like many others, resented needing to wear the badge on her sternum for maximum audio fidelity (and so the infrared sensors that establish the wearers’ identities have a clear line of sight). Several wished it could be a pendant or lapel pin or wristlet—anything less intrusive.
Others complained about the user interface, or lack thereof. The blue twinkling I’d noticed was only one of several colors, none of which had been explained during orientation. Some found this Orwellian; others reported being lulled into complacency by its low-tech appearance and cheap plastic casing. Still more wanted feedback: Was this thing on? Was I doing this right? Cognitive dissonance soon manifested. Writers and editors who complained in one breath about opaque surveillance suggested in the next that only if the badge could replace their Jawbone UPs and Fitbits—in the process capturing their quantified selves for their employer—would the exercise be worthwhile.
Co.Exist editor Morgan Clendaniel took this idea to its logical conclusion, proposing that flat-screens mounted around the office broadcast our interactions in real time, à la the visualizations produced by the likes of Chartbeat, which depicts the performance of individual online stories on a moment-by-moment basis. (A Los Angeles-based startup named Rexter does exactly that.)
Humanyze CEO Ben Waber understands their concerns, from the interference with statement necklaces to the deliberate lack of clarity from the badges. "Lights blinking all the time is distracting," he says. "It’s a difficult line to manage." As far as the wearer’s comfort goes, he’s confident that Moore’s Law will reduce the weight of sociometric badges until they are indistinguishable from standard-issue IDs. (The latest version of the badge, which we did not wear, is half the size of the previous iteration.) But he’s adamant that the badges will always be worn on the chest, as it’s the only way to guarantee conversations will be heard clearly.
Humanyze prides itself on privacy. Several weeks after our badges had been shipped back to Boston for analysis, we each received a link to our individual results. Not only was this data shielded from our employer, we were assured, but Fast Company was also contractually forbidden to ask us what was in our reports. Which explains why Laura Freeman, the "quantitative social scientist" who prepared our reports, was audibly dismayed when I announced my intentions to reverse-engineer them.
In my own case, the results confirmed what I already knew: that I was a marginal figure in the office, which I rarely visit. While I may have spent more time moving and speaking than most participants in order to gin up conversations about the badges, the extent of my connections would be considered subpar at best. ("You can increase your face-to-face network breadth by making an effort to meet new colleagues," my results helpfully suggested.)
Nick Gerasimatos, senior director of cloud services engineering at FICO, dives into the lack of persistent storage with containers and how Docker volumes and data containers provide a fix.
OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].
Robert Collins notes that while the constraints system we use to recognize incompatible components in our release is working, the release team needs help from the community to fix the incompatibilities that exist so we can cut the full Liberty release.
Robert Collins says currently we don’t provide guidance on what happens when the only changes in a project are dependency changes and a release is made.
Today the release team treats dependency changes as a “feature” rather than a bug fix (e.g., if the previous release was 1.2.3 and a requirements sync happens, the next version is 1.3.0).
The reasons behind this are complex; some guidance is needed to answer the following questions:
Is this requirements change an API break?
Is this requirements change feature work?
Is this requirements change a bug fix?
All of these questions can be true. Some examples:
Library X exposes library Y as part of its API, and X’s dependency on Y changes from Y>=1 to Y>=2 because X needs a feature from Y==2. That change is both feature work and a potential API break.
Library Y is not exposed in library X’s API; however, a change in X’s dependency on Y will still impact users who independently use Y (ignoring intricacies surrounding pip here).
Proposal:
nothing -> a requirement -> major version change
1.x.y -> 2.0.0 -> major version change
1.2.y -> 1.3.0 -> minor version change
1.2.3 -> 1.2.4 -> patch version change
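Expressed as code, the proposal amounts to comparing the old and new minimum versions of a dependency and bumping the corresponding component of our own version. A minimal Python sketch of that rule set, as one reading of the proposal rather than an official implementation:

```python
# A sketch of the proposed rule set: given how a dependency's minimum
# version moved, decide which part of our own version to bump.

def bump_for_requirement_change(old_min, new_min):
    """Return 'major', 'minor', or 'patch' per the proposal above.

    old_min/new_min are (major, minor, patch) tuples; old_min is None
    when the requirement is brand new.
    """
    if old_min is None:                 # nothing -> a requirement
        return "major"
    if new_min[0] != old_min[0]:        # 1.x.y -> 2.0.0
        return "major"
    if new_min[1] != old_min[1]:        # 1.2.y -> 1.3.0
        return "minor"
    return "patch"                      # 1.2.3 -> 1.2.4

assert bump_for_requirement_change(None, (1, 0, 0)) == "major"
assert bump_for_requirement_change((1, 2, 3), (2, 0, 0)) == "major"
assert bump_for_requirement_change((1, 2, 3), (1, 3, 0)) == "minor"
assert bump_for_requirement_change((1, 2, 3), (1, 2, 4)) == "patch"
```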
Thierry Carrez is OK with the last two proposals; defaulting to a major version bump sounds a bit like overkill.
Doug Hellmann reminds that we can’t assume the dependency is using semver itself. We would need something other than the version number to determine from the outside whether the API is in fact breaking.
Because this problem is so complicated, Doug would rather over-simplify the analysis of requirements updates until we’re better at identifying our own API-breaking changes and differentiating between features and bug fixes. This will allow us to be consistent, if not 100% correct.
The vulnerability management processes were brought to the big tent a couple of months ago [4].
Initially we listed what repos the Vulnerability Management Team (VMT) tracks for vulnerabilities.
The TC decided to change this from repos to deliverables, as per-repo tags were decided against.
Jeremy Stanley provides transparency for how deliverables can qualify for this tag:
All repos in a given deliverable must qualify; if one repo doesn’t, the whole deliverable doesn’t.
Points of contact:
Deliverable must have a dedicated point of contact.
The VMT will engage with this contact to triage reports.
A group of core reviewers should be part of the <project>-coresec team and will:
Confirm whether a bug is accurate/applicable.
Provide pre-approval of patches attached to reports.
The PTLs for the deliverable should agree to act as (or delegate) a vulnerability management liaison, serving as an escalation point for the VMT.
The repos within a deliverable should have a bug tracker configured to initially restrict access to privately reported vulnerabilities to the VMT.
The VMT will determine if the vulnerability is reported against the correct deliverable and redirect when possible.
The deliverable repos should undergo a third-party review/audit looking for obvious signs of insecure design or risky implementation.
This aims to keep the VMT’s workload down.
It has not been identified who will perform this review. Maybe the OpenStack Security project team?
While a bug [6] was being debugged, an issue was identified where an API sitting behind a proxy performing SSL termination would not generate the right redirection (http instead of https).
A review [7] proposes a config option ‘secure_proxy_ssl_header’, which allows the API service to detect SSL termination based on the X-Forwarded-Proto header.
Another bug with the same issue was opened back in 2014 [8].
Several projects applied patches to fix this issue, but the fixes are inconsistent:
Ben Nemec comments that the service level is the wrong place to solve this, because it requires changes in a bunch of different API services. Instead, it should be fixed in the proxy that’s converting the traffic to http.
Sean Dague notes that this should be done in the service catalog. Service discovery is a base thing that all services should use in talking to each other. There’s an OpenStack spec [9] that attempts to get a handle on this.
Mathieu Gagné notes that this won’t work. There is a “split view” in the service catalog where internal management nodes have a specific catalog and public nodes (for users) have a different one.
There is a suggestion to use the oslo.middleware SSL middleware, which supports the ‘secure_proxy_ssl_header’ config, to fix the problem with little code.
Sean agrees the split view needs to be considered; however, another layer of work shouldn’t decide whether the service catalog is a good way to keep track of what our service URLs are. We shouldn’t push a model where Keystone is optional.
Sean notes that while the ‘secure_proxy_ssl_header’ config solution supports the case of one HA proxy doing SSL termination in front of one API service, it may not work when one API service sits behind N HA proxies, where the following must still hold:
Clients understand the “Location:” headers correctly.
Libraries like requests/phantomjs can follow the links provided in REST documents, and those links are correct.
The minority of services that “operate without keystone” as an option are able to function.
ZZelle mentions this solution does not work in the cases when the service itself acts as a proxy (e.g. nova image-list).
Would this solution work in the HA Proxy case where there is one terminating address for multiple backend servers?
Yes, by honoring the X-Forwarded-Host and X-Forwarded-Port headers, which are set by HTTP proxies, making WSGI applications unaware that there is a proxy in front of them.
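For illustration, a minimal WSGI middleware sketch of that approach in Python. The production implementation would live in oslo.middleware; this is only the idea, not its actual code.

```python
# Sketch: rewrite the WSGI environ from X-Forwarded-* headers so the app
# generates Location/redirect URLs as seen by the outside world.

class ProxyHeadersMiddleware(object):
    """Make the app unaware of the SSL-terminating proxy in front of it."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto in ("http", "https"):
            # e.g. https when a proxy terminates SSL in front of us
            environ["wsgi.url_scheme"] = proto
        host = environ.get("HTTP_X_FORWARDED_HOST")
        if host:
            port = environ.get("HTTP_X_FORWARDED_PORT")
            if port and ":" not in host:
                host = "%s:%s" % (host, port)
            environ["HTTP_HOST"] = host   # external-facing host[:port]
        return self.app(environ, start_response)

# Usage: wrap the API's WSGI app at startup.
# application = ProxyHeadersMiddleware(application)
```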
Jamie Lennox says this same topic came up as a block in a Devstack patch to get TLS testing in the gate with HA Proxy.
Longer-term solution: transition services to use relative links.
This is a pretty serious change. We’ve been returning absolute URLs forever, so assuming that all client code out there would work with relative links is a big assumption. That’s a major version bump for sure.
Sean agrees that we have enough pieces to get something better with proxy headers for Mitaka. We can address the remaining edge cases if we clean up service catalog use.
Tagging efforts for diversity and deprecation policies
Since we began tagging projects based on the percentage of diverse affiliations, we have also been discussing the idea of an inverse tag to indicate non-diversity. Some of us on the TC are unsure that a lack of diversity is an indicator that the project isn’t useful or successful, especially in the early days of a project’s maturation. Others would like to indicate that a lack of diversity could mean support would be easy to pull.
We’ve passed a “follows-standard-deprecation” tag that projects can apply for in order to indicate that their deprecation policies follow the standard for all OpenStack projects. No projects are asserting it yet, but we want to make sure the community knows we’ve written the policies for configuration option and potential feature deprecation.
Code of Conduct (CoC)
Cindy Pallares reached out to the Technical Committee with a proposal to improve OpenStack’s current CoC. After reviewing CoCs from other communities and listening to the feedback provided by Cindy and other members, the OpenStack CoC will be updated and improved so that it is organized by context, such as online or at events. Please do stay in touch and follow this discussion closely, as it’ll have an impact on the whole community. We do have Codes of Conduct in place for both contexts, but we actively review these as the community grows, diversifies, and matures to ensure they meet the needs of all members.
Considering additional programming languages
There’s a new resolution defining how projects written in other programming languages should be evaluated. The resolution talks about how we mostly plan for Python projects, with JavaScript (Dashboard) and bash (DevStack) also enabled over time. The discussion started a few meetings ago, where things like the big tent, community impact, infrastructure impact and technology impact were highlighted and discussed at a high level. Since this topic impacts the whole community, we appreciated the input we got and welcome all to read and understand the resolution. We concluded that we will consider additional languages, but we need to ensure common processes and tooling for infrastructure, testing and documentation as part of the larger picture, especially for OpenStack services.
Handling project teams with no candidate PTLs
We got “our garumphs out” over time zone confusion with the recent candidacy round and approved PTLs for these projects:
Security: Robert Clark
Key Manager (barbican): Douglas Mendizabal
Application Catalog (murano): Serg Melikyan
For the Containers (magnum) project, the two candidates Hongbin Lu and Adrian Otto agreed to an election to resolve a timing problem with the candidate submissions. The election officials agreed they could run another PTL election just for magnum, so look for that ballot in your inbox if you worked on the magnum codebase in the last six months.
MagnetoDB didn’t receive any candidacies. Unfortunately, this project hasn’t received contributions in a while and it’s being considered for removal from the Big Tent. Read more about the current discussion on the review itself.
As a reminder, our charter currently states, “Voters for a given project’s PTL election are the active project contributors (“APC”), which are a subset of the Foundation Individual Members. Individual Members who committed a change to a repository of a project over the last two 6-month release cycles are considered APC for that project team.” The names of repositories of projects are kept in the projects.yaml file in the openstack/governance repository.
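As a rough approximation of that eligibility check, the sketch below lists unique commit author emails for one repo over the last two 6-month cycles. The real election tooling also cross-checks Foundation membership and Gerrit identities; this only inspects git history.

```python
# Approximate APC list for one repo: unique commit author emails over
# the last two 6-month release cycles. Path below is a placeholder.
import subprocess

def active_project_contributors(repo_path):
    out = subprocess.check_output(
        ["git", "log", "--since=12 months ago", "--format=%ae"],
        cwd=repo_path)
    return sorted(set(out.decode().split()))

for email in active_project_contributors("/path/to/openstack/magnum"):
    print(email)
```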
Applications incoming and welcoming
As always we are busy reviewing incoming applications to OpenStack governance.
The Monasca project has been asked to keep working on their open processes and keep their application alive in the queue. Three items of feedback for their consideration are: 1) Integration tests should be running as a gate job with OpenStack CI tools, using devstack as a bootstrap. 2) Host the source in gerrit (review.openstack.org) so that all components and tests are well-understood. 3) Better integration with the rest of the community, using more patterns of communication and doing cross-project liaison work.
We discussed the Kosmos project application, a very new project, formed initially from members of Designate and Neutron LBaaS, to provide global server load balancing for OpenStack clouds. A few of the TC members would prefer to see more evidence of their work, others think that the new definition of working like OpenStack should enable them to apply and be accepted.
We are thinking about the CloudKitty and Juju Charms for Ubuntu applications to OpenStack governance and will consider them at the next TC meeting. As guidance for timing, we add motions presented before Friday 0800 UTC to the next Tuesday meeting agenda for discussion.
Cross-project Track
At the upcoming Mitaka summit, the community will have a dedicated track for cross-project discussion. The proposal period is now open and runs until October 9th. Sessions can be proposed on ODSREG. More info can be found in this thread.
Popp have released a new smart home gateway that allows you to control your home from your smartphone or tablet via the accompanying free iOS and Android apps. The gateway is based on the Raspberry Pi and configured through a web browser via its Ethernet port. It will manage up to 230 Z-Wave modules, as well as network cameras and other IP based […]
Today, Citrix announced that the company has been named a leader in the Forrester Research, Inc. report, The Forrester Wave: Server-Hosted Virtual… Read more at VMblog.com.
Making a cellphone is easy. You go into a mine, pull up some ore, extract various metals and then add components that you manufacture from other mines. Then you have to get FCC clearance and create a lithium-ion battery. Finally, you need to write a Snake game. If you can’t do that, try RePhone.
The project is actually a tiny circuit board with a SIM slot and an optional screen. It also…
Most 3D prints have the same ridged, textured finish to them due to the nature of extruders. Find out how to customize your prints with these techniques.
If you are a full-time contributor, please consider sharing your time, knowledge and experience to make our community more diverse and you’ll have the opportunity to meet new talents. Ask for further directions in #OpenStack-opw on Freenode.
Rob Hirschfeld, co-chair of the DefCore committee, shares more on DefCore, which defines capabilities, code and must-pass tests, creating the minimum standards for products labeled OpenStack.
Five projects don’t have candidates. According to OpenStack governance, the TC will appoint the new PTL [1].
Barbican
MagnetoDB
Magnum
Murano
Security
Seven projects will have an election:
Cinder
Glance
Ironic
Keystone
Mistral
Neutron
Oslo
There was confusion about UTC deadlines and how to submit nominations through Gerrit, but the TC will work with those candidates in Magnum, Barbican, Murano and Security.
Doug Hellmann says MagnetoDB will be discussed for removal due to inactivity. [1]
From conversations at the Ops Midcycle meetup and email threads regarding Glance issues, Doug Hellmann put together a list of proposed priorities for the Glance team. First, focus attention on DefCore:
DefCore goals: Ensure all OpenStack deployments are interoperable at the REST API level (users can write software for one OpenStack cloud and move to another without changes to the code).
Provide a well documented API with arguments that don’t change based on deployment choices.
Integration tests in Tempest that test Glance’s API directly, in addition to the current tests that proxy through Nova and Cinder.
Once incorporated into DefCore, the APIs need to remain stable for an extended period of time, and follow deprecation timelines defined by complete V2 adoption in Nova and Cinder.
In Nova, some specs didn’t land in Liberty. Both teams need to work together.
In Cinder, the work is more complete, but it needs review to confirm the API is used correctly.
Security audits and bug fixes
5 out of the 18 recent security reports were related to Glance [2].
Two ways to upload images to Glance V2:
1) POST image bits to Glance API server.
Not widely deployed. Potential DOS vector.
2) Task API, to have Glance download it asynchronously.
Not widely deployed.
Assumes you know which task “types” are supported by which cloud and the expected arguments (a JSON blob); e.g., the Glance docs give a URL as a source, but Rackspace expects a Swift location as the source.
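For reference, a rough sketch of the two upload paths using Python’s requests library against the Glance v2 API. The endpoint, token and task input keys below are placeholders, and, as noted above, the accepted task input varies by cloud.

```python
# Sketch of the two Glance v2 upload paths; GLANCE and TOKEN are
# placeholders for a real endpoint and Keystone token.
import requests

GLANCE = "http://glance.example.com:9292"     # hypothetical endpoint
HEADERS = {"X-Auth-Token": "TOKEN"}

# Path 1: create the image record, then PUT the bits directly.
image = requests.post(
    GLANCE + "/v2/images", headers=HEADERS,
    json={"name": "cirros", "disk_format": "qcow2",
          "container_format": "bare"}).json()
with open("cirros.qcow2", "rb") as f:
    requests.put(GLANCE + "/v2/images/%s/file" % image["id"],
                 headers={"X-Auth-Token": "TOKEN",
                          "Content-Type": "application/octet-stream"},
                 data=f)

# Path 2: the task API, where Glance fetches the bits asynchronously.
# The accepted "input" keys vary by cloud, which is exactly the
# problem described above.
requests.post(
    GLANCE + "/v2/tasks", headers=HEADERS,
    json={"type": "import",
          "input": {"import_from": "http://example.com/cirros.qcow2",
                    "image_properties": {"name": "cirros"}}})
```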
It was observed that five public clouds require you to use a floating IP to get an outbound address, while others attach you directly to the public network.
Some allow you to create a private network, attach virtual machines to it, and create a router with a gateway.
Monty wants an easier way to have a virtual machine on the external-facing network of a cloud. Users shouldn’t have to learn how to make that work with floating IPs. This should be consistent behavior across public clouds. There is an effort set for Mitaka to work on Monty’s request [3]. This will be done for ‘nova boot’ and will work with multiple networks.
If you have a more complicated network setup, this spec isn’t for you.
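For context, here is roughly what the desired behavior looks like today through python-novaclient, with networks passed explicitly as nics. Credentials and UUIDs are placeholders.

```python
# Sketch of 'nova boot' with explicit networks via python-novaclient;
# all credentials and IDs below are placeholders.
from keystoneauth1 import identity, session
from novaclient import client

auth = identity.Password(
    auth_url="http://keystone.example.com:5000/v3",
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default")
nova = client.Client("2", session=session.Session(auth=auth))

# Attach the VM both to the external-facing network (the behavior Monty
# wants to be consistent) and to a private network.
server = nova.servers.create(
    name="myvm",
    image="IMAGE_UUID",
    flavor="FLAVOR_ID",
    nics=[{"net-id": "EXTERNAL_NET_UUID"},
          {"net-id": "PRIVATE_NET_UUID"}])
```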
Thierry proposes a standard way to communicate and perform removal of user-visible behaviors and capabilities.
We sort of have something today, but nothing written down along the lines of “to remove a feature, you mark it deprecated for n releases, then remove it”.
Tag proposed [4].
We need to survey existing projects to see what their deprecation policy is.
Proposed options for deprecation period:
n+2 for features and capabilities, n+1 for config options
n+1 for everything
n+2 for everything
Ben Swartzlander thinks this discussion also needs to cover long term support (LTS).
Fungi thinks this is premature. The Icehouse stable branch made it to 14 months before it was dropped because not enough effort was put into keeping it working.
It was agreed “config options and features will have to be marked deprecated for a minimum of one stable release branch and a minimum of 3 months”.
Josh Harlow is concerned that most projects start off small and not diverse, and this tag [5] would create negative connotations for those projects.
Thierry points out that it’s important to consider the intent of the tag rather than its name.
The tag system is there to help our ecosystem navigate the big tent by providing bits of information.
Example of information: how risky is it to invest on a given project?
Some projects are dependent on a single company and could disappear in one day on the CEO’s decision.
For this reason, Thierry supports describing project teams that are *extremely* fragile.
As a result, the big tent is more inclusive. On the flip side, we need to inform our ecosystem that some projects are less mature. Otherwise, we’re hiding this information.