FAQ: Evolving the OpenStack Design Summit


Please join us for a community town hall on May 25 at 11:30 UTC or 19:00 UTC (to cover as many timezones as possible) to talk through the plans, answer questions and provide your input.

As a result of community discussion, the OpenStack Foundation is evolving the format of the events it produces for the community starting in 2017. The proposal is to split the current Design Summit, which is held every six months as part of the main OpenStack Summit, into two parts: a “Forum” at the main Summit for cross-community discussions and user input (we call this the “what” discussions), and a separate “Project Teams Gathering” event for project team members to meet and get things done (the “how” discussions and sprinting). The intention is to alleviate the need for a separate mid-cycle, so development teams would continue to meet four times per year, twice with the community at large and twice in a smaller, more focused environment. The release cycle would also shift to create more space between the release and Summit. The change triggered a lot of fears and questions — the intent of this FAQ is to try to address them.

 

Q: How is the change helping upstream developers?

 

A: During the Summit week, upstream developers have a lot of different goals. We leverage the Summit to communicate new things (give presentations), learn new things (attend presentations), get feedback from users and operators on our last release, gather pain points and priorities for upcoming development, propose changes and see what the community thinks of them, recruit and on-board new team members, have essential cross-project discussions, meet with our existing project team members, kickstart the work on the new cycle, and get things done. There is just not enough time in 4 or 5 days to do all of that, so we usually drop half of those goals. Most will skip attending presentations. Some will abandon the idea of presenting. Some will drop cross-project discussions, resulting in those discussions not having the critical mass of representation to actually serve their purpose. Some will drop out of their project team meeting to run somewhere else. The time conflicts make us jump between sessions, leaving us generally unavailable to listen to feedback, pain points, or newcomers. By the end of the week we are so tired we can’t get anything done. We need to free up time during the week. There are goals that can only be reached in the Summit setting, where all of our community is represented — we should keep those goals in the Summit week. There are goals that are better reached in a distraction-free setting — we should organize a separate event for them.


Q: What is the “Forum”?

 

A: “Forum” is the codename for the part of the Design Summit (Ops+Devs) that would still happen at the main Summit event. It will primarily be focused on strategic discussions and planning for the next release (the “what”), essentially the start of the next release cycle even though development will not begin for another 3 months. We should still take advantage of having all of our community (Devs, Ops, End users…) represented to hold cross-community discussions there. That means getting feedback from users and operators on specific projects in our last release, gathering pain points and priorities for upcoming development, proposing changes and seeing what the community thinks of them, and recruiting and on-boarding new team members. We’d like to do that in a neutral space (rather than have separate “Ops” and “Dev” days) so that the discussion is not influenced by who owns the session. This event would happen at least two months after the previous release, to give users time to test and bring valuable feedback.


Q: What is the “Project Teams Gathering”?

 

A: “Project Teams Gathering” is the codename for the part of the Design Summit that will now happen as a separate event. It will primarily provide space for project teams to make implementation decisions and start development work (the “how”). This is where we’d have essential cross-project discussions, meet with our existing project team members, generate shared understanding, kickstart the development work on the new cycle, and generally get things done. OpenStack project teams would be given separate rooms to meet for one or more days, in a loose format (no 40-min slots). If you self-identify as a member of a specific OpenStack project team, you should definitely join. If you are not part of a specific project team (or can’t pick one team), you could still come but your experience of the event would likely not be optimal, since the goal of the attendees at this event is to get things done, not listen to feedback or engage with newcomers. This event would happen around the previous release time, when developers are ready to fully switch development work to the new cycle.


Q: How is the change helping OpenStack as a whole?

 

A: Putting the larger Summit event further away from the last release should dramatically improve the feedback loop. Currently, calling for feedback at the Summit is not working: users haven’t had time to use the last release at all, so most of the feedback we collect is based on the 7-month-old previous release. It is also the wrong timing to push for new features: we are already well into the new cycle and it’s too late to add new priorities to the mix. The new position of the “Forum” event with respect to the development cycle should make it late enough to get feedback from the previous release and early enough to influence what gets done on the next cycle. By freeing up developers’ time during the Summit week, we also expect to improve the Summit experience for all attendees: developers will be more available to engage and listen. The technical content at the conference will also benefit from having more upstream developers available to give talks and participate in panels. Finally, placing the Summit further away from the release should help vendors prepare and announce products based on the latest release, making the Summit marketplace more attractive and relevant.

 

Q: When will the change happen?

 

A: Summits are booked through 2017 already, so we can’t really move them anytime soon. Instead, we propose to stagger the release cycle. There are actually 7 months between Barcelona and Boston, so we have an opportunity there to stagger the cycle with limited impact. The idea would be to do a 5-month release cycle (between October and February), place our first Project Teams Gathering at the end of February, then go back to 6-month cycles (March-August) and have the Boston Summit (and Forum) in the middle of it (May). So the change would kick in after Barcelona, in 2017. That gives us time to research venues and refine the new event format.


Q: What about mid-cycles?

 

A: Mid-cycle sprints were organized separately by project teams as a way to gather team members and get things done. They grew in popularity as the distractions at the main Summit increased and it became hard for project teams to get together, build social bonds and generally be productive at the Design Summit. We hope that teams will get back that lost productivity and social bonding at the Project Teams Gathering, eliminating the need for separate team-specific sprints. 


Q: This Project Teams Gathering thing is likely to be a huge event too. How am I expected to be productive there? Or to be able to build social bonds with my small team?

 


A: Project Teams Gatherings are much smaller events compared to Summits (think 400-500 people rather than 7500). Project teams are placed in separate rooms, much like a co-located midcycle sprint. The only moment where everyone would meet would be around lunch. There would be no evening parties: project teams would be encouraged to organize separate team dinners and build strong social bonds.


Q: Does that new format actually help with cross-project work?


A: Cross-project work was unfortunately one of the things a lot of attendees dropped as they struggled with all the things they had to do during the Summit week. Cross-project workshops ended up being less and less productive, especially in getting to decisions or work produced. Mid-cycle sprints ended up being where the work could be done, but because they were organized separately it was very costly for a member of a cross-project team (infrastructure, docs, QA, release management…) to attend them all. We basically set up our events in a way that made cross-project work prohibitively expensive, and then wondered why we had so much trouble recruiting people to do it. The new format ensures that we have a place to actually do cross-project work, without anything running against it, at the Project Teams Gathering. It dramatically reduces the number of places a Documentation person (for example) needs to travel to get some work done in-person with project team members. It gives project team members in vertical teams an option to break out of their silo and join such a cross-project team. It allows us to dedicate separate rooms to specific cross-project initiatives, beyond existing horizontal teams, to get specific cross-project work done.


Q: Are devs still needed at the main Summit?

 

A: Upstream developers are still very much needed at the main Summit. The Summit is (and always was) where the feedback loop happens. All project teams need to be represented there, to engage in planning, collect the feedback on their project, participate in cross-community discussions, reach out to new people and on-board new developers. We also very much want to have developers give presentations at the conference portion of the Summit (we actually expect that more of them will have free time to present at the conference, and that the technical content at the Summit will therefore improve). So yes, developers are still very much needed at the main Summit.


Q: My project team falls apart if the whole team doesn’t meet in person every 3 months. We used to do that at the Design Summit and at our separate mid-cycle project team meeting. I fear we’ll lose our ability to all get together every 3 months.

 

A: As mentioned earlier, we hope the Project Teams Gathering will be a lot more productive than the current Design Summit, reducing the need for mid-cycle sprints. That said, if you really still need to organize a separate mid-cycle sprint, you should definitely feel free to do so. We plan to provide space at the main Summit event so that you can hold mid-cycle sprints there and take advantage of the critical mass of people already around. If you decide to host a mid-cycle sprint, you should communicate that your team mid-cycle will be co-located with the Summit and that team member attendance is strongly encouraged.


Q: We are a small team. We don’t do mid-cycles currently. It feels like with your change, we’ll have to travel to two events per cycle instead of one.

 

A: You need to decide if you feel the need to get the team all together to get some work done. If you do, you should participate (as a team) in the Project Teams Gathering. If you don’t, your team should skip it. The PTL and whoever is interested in cross-project work in your team should still definitely come to the Project Teams Gathering, but you don’t need to get every single team member there as you would not have a team room there. In all cases, your project wants to have some developers present at the Summit to engage with the rest of the community.


Q: The project I’m involved with is mostly driven by a single vendor, most of us work from the same office. I’m not sure it makes sense for all of us to travel to a remote location to get some work done !

 

A: You are right, it doesn’t. We’ll likely not provide specific space at the Project Teams Gathering for single-vendor project teams. The PTL (and whoever else is interested) should probably still come to the Project Teams Gathering to participate in cross-project work. And you should also definitely come to the Summit to engage with other organizations and contributors and increase your affiliation diversity to the point where you can take advantage of the Project Teams Gathering.


Q: I’m a translator. Should I come to the Project Teams Gathering?

 


A: The I18n team is of course free to meet at the Project Teams Gathering. However, given the nature of the team (large number of members, geographically-dispersed, coming from all over our community, ops, devs, users), it probably makes sense to leverage the Summit to get translators together instead. The Summit constantly reaches out to new communities and countries, while the Project Teams Gathering is likely to focus on major developer areas. We’ll likely get better outreach results by holding I18n sessions or workshops at the “Forum” instead.


Q: A lot of people attend the current Design Summit to get a peek at how the sausage is made, which potentially results in getting involved. Doesn’t the new format jeopardize that on-boarding?

 

A: It is true that the Design Summit was an essential piece in showing how open design worked to the rest of the world. However, that was always done at the expense of existing project team members’ productivity. Half the time in a 40-min session would be spent summarizing the history of the topic to newcomers. Lively discussions would be interrupted by people in the back asking that participants use the mic. We tried to separate fishbowls and workrooms at the Design Summit, to separate discussion/feedback sessions from team-member work sessions. That worked for a time, but people started working around it, making some work rooms look like overcrowded fishbowl rooms. In the end that made for a miserable experience for everyone involved and created a lot of community tension. In the new format, the “Forum” sessions will still allow people to witness open design at work, and since those are specifically set up as listening sessions (rather than “get things done” sessions), we’ll take time to engage and listen. We’ll free up time for specific on-boarding and education activities. Fewer conflicts during the week mean we won’t always be running to our next session and will likely be more available to reach out to others in the hallway track.


Q: What about the Ops midcycle meetup?

 

A: The Ops meetups are still happening, and for the next year or two probably won’t change much at all. In May, the “Ops Meetups Team” was started to answer the questions about the future of the meetups, and also actively organize the upcoming ones. Part of that team’s definition: “Keeping the spirit of the ops meetup alive” – the meetups are run by ops, for ops and will continue to be. If you have interest, join the team and talk about the number and regional location of the meetups, as well as their content.


Q: What about ATC passes for the Summit?

 

A: The OpenStack Foundation gave discounted passes to a subset of upstream contributors (not all ATCs) who contributed in the last six months, so that they could more easily attend the Summit. We’ll likely change the model since we would be funding a second event, but will focus on minimizing costs for people who have to travel to both the Summit and the Project Teams Gathering. The initial proposal is to charge a minimal fee for the Project Teams Gathering (to better gauge attendance and help keep sponsorship presence to a minimum), and then anyone who was physically present at the Project Teams Gathering would receive a discount code to attend the next Summit. Something similar is also being looked into for contributors represented by the User Committee (e.g., ops). At the same time, we’ll likely beef up the Travel Support Program so that we can get all the needed people at the right events.

 

If you have additional questions in mind, please join us for the virtual town hall next week and email them to [email protected] or [email protected] to make sure we address them during the session. We will also make the recording available for those who cannot attend.

X1 Instances for EC2 – Ready for Your Memory-Intensive Workloads


Many AWS customers are running memory-intensive big data, caching, and analytics workloads and have been asking us for EC2 instances with ever-increasing amounts of memory.

Last fall, I first told you about our plans for the new X1 instance type. Today, we are announcing availability of this instance type with the launch of the x1.32xlarge instance size. This instance has the following specifications:

  • Processor: 4 x Intel Xeon E7-8880 v3 (Haswell) running at 2.3 GHz – 64 cores / 128 vCPUs.
  • Memory: 1,952 GiB with Single Device Data Correction (SDDC+1).
  • Instance Storage: 2 x 1,920 GB SSD.
  • Network Bandwidth: 10 Gbps.
  • Dedicated EBS Bandwidth: 10 Gbps (EBS Optimized by default at no additional cost).

The Xeon E7 processor supports Turbo Boost 2.0 (up to 3.1 GHz), AVX 2.0, AES-NI, and the very interesting (to me, anyway) TSX-NI instructions. AVX 2.0 (Advanced Vector Extensions) can improve performance on HPC, database, and video processing workloads; AES-NI improves the speed of applications that make use of AES encryption. The new TSX-NI instructions support something cool called transactional memory. The instructions allow highly concurrent, multithreaded applications to make very efficient use of shared memory by reducing the amount of low-level locking and unlocking that would otherwise be needed around each memory access.

If you are ready to start using the X1 instances in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), or Asia Pacific (Sydney) Regions, please request access and we’ll get you going as soon as possible. We have plans to make the X1 instances available in other Regions and in other sizes before too long.

3-year Partial Upfront Reserved Instance Pricing starts at $3.970 per hour in the US East (Northern Virginia) Region; see the EC2 Pricing page for more information. You can purchase Reserved Instances and Dedicated Host Reservations today; Spot bidding is on the near-term roadmap.
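If you want to script a launch once your account has been granted access, an X1 launch looks like any other EC2 instance type. Here is a minimal sketch using Python and boto3 (my choice of tooling, not something from the original post); the AMI ID is a placeholder, and credentials and region are assumed to be configured:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",       # placeholder: use an HVM AMI in your Region
    InstanceType="x1.32xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])

Since Spot bidding is still on the roadmap, an On-Demand or Reserved launch like this is the assumed path for now.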

Here are some observations from an x1.32xlarge in action. lscpu shows that there are 128 vCPUs spread across 4 sockets. On bootup, the kernel reports the total accessible memory, and the top command shows a huge number of running processes and lots of memory.

Ready for Enterprise-Scale SAP Workloads
The X1 instances have been certified by SAP for production workloads. They meet the performance bar for SAP OLAP and OLTP workloads backed by SAP HANA.

You can migrate your on-premises deployments to AWS and you can also start fresh. Either way, you can run S/4HANA, SAP’s next-generation Business Suite, as well as earlier versions.

Many AWS customers are currently running HANA in scale-out fashion across multiple R3 instances. Many of these workloads can now be run on a single X1 instance. This configuration will be simpler to set up and less expensive to run. As I mention below, our updated SAP HANA Quick Start will provide you with more information on your configuration options.

Here’s what SAP HANA Studio looks like when run on an X1 instance.

You have several interesting options when it comes to disaster recovery (DR) and high availability (HA) when you run your SAP HANA workloads on an X1 instance. For example:

  • Auto Recovery – Depending on your RPO (Recovery Point Objective) and RTO (Recovery Time Objective), you may be able to use a single instance in concert with EC2 Auto Recovery.
  • Hot Standby – You can run X1 instances in 2 Availability Zones and use HANA System Replication to keep the spare instance in sync.
  • Warm Standby / Manual Failover – You can run a primary X1 instance and a smaller secondary instance configured to persist only to permanent storage. In the event that a failover is necessary, you stop the secondary instance, modify the instance type to X1, and reboot (see the sketch after this list). This unique, AWS-powered option will give you quick recovery while keeping costs low.
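As a rough illustration of the warm-standby failover described above, here is a hedged sketch using Python and boto3 (an assumption on my part; the post does not prescribe tooling). The instance ID is a placeholder, and a real runbook would add error handling and HANA-level checks:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
secondary = "i-0123456789abcdef0"   # placeholder: your secondary instance ID

# Stop the small secondary instance, switch it to X1, then start it again.
ec2.stop_instances(InstanceIds=[secondary])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[secondary])
ec2.modify_instance_attribute(InstanceId=secondary,
                              InstanceType={"Value": "x1.32xlarge"})
ec2.start_instances(InstanceIds=[secondary])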

We have updated our HANA Quick Start as part of today’s launch. You can get SAP HANA running in a new or existing VPC within an hour using a well-tested configuration.

The Quick Start will help you configure the instance and the associated storage, install the requisite operating system packages, and install SAP HANA.

We have also released a SAP HANA Migration Guide. It will help you to migrate your existing on-premises or AWS-based SAP HANA workloads to AWS.


Jeff;

How to hide real name and email address on Windows 10 Lock Screen



For privacy and security reasons you may decide that you do not want to display your real name and your email address on your Windows 10 Lock Screen, where you enter your PIN or password to sign in. This post will show you how to hide your real name and email address on the Windows 10 Lock Screen using Group Policy or the Registry, and via the Settings app from Build 14328 onwards.

Hide real name & email address on Windows 10 Lock Screen

Using Group Policy setting

Do not display last user name

Run gpedit.msc and navigate to the following setting:

Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options

Double-click on the Interactive logon: Do not display last user name setting and select Enabled.

This security setting determines whether the name of the last user to log on to the computer is displayed in the Windows logon screen. If this policy is enabled, the name of the last user to successfully log on is not displayed in the Logon Screen.

Click Apply and exit.

Using Registry Editor



Run regedit and navigate to the following registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

In the right pane, double-click on dontdisplaylastusername and change its value from 0 to 1.

DontDisplayLockedUserID

You will also see a DontDisplayLockedUserID value there. Its possible values are:

  • 1 : Show the locked user display name and the user ID
  • 2 : Show the locked user display name only
  • 3 : Do not display the locked user information

Select 2 or 3 depending on your preference, save the settings and exit the Registry Editor.
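If you would rather script this than edit the registry by hand, here is a minimal sketch using Python’s built-in winreg module (my own illustration, not part of the original instructions). Run it from an elevated prompt, since writing to HKEY_LOCAL_MACHINE requires administrator rights:

import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 1 = hide the last signed-in user's name on the logon screen
    winreg.SetValueEx(key, "dontdisplaylastusername", 0, winreg.REG_DWORD, 1)
    # 3 = do not display any locked-user information
    winreg.SetValueEx(key, "DontDisplayLockedUserID", 0, winreg.REG_DWORD, 3)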

Windows 10 will now hide your real name and email address on the Lock Screen.

In Windows 10 Anniversary Update Build 14328 and later, you can make this change easily from the Settings app. You will find the setting under: Settings > Accounts > Sign-in options > Privacy > Show account details on sign-in screen.




10 most in-demand Internet of Things skills



The Internet of Things (IoT) is in the midst of an explosion, as more connected devices proliferate. But there’s not enough talent with the right skills to manage and execute on IoT projects. In fact, insufficient staffing and lack of expertise are the top-cited barrier for organizations currently looking to implement and benefit from IoT, according to research from Gartner.

"We’re seeing tech companies around the globe getting organized and creating IoT strategies, but where they’re struggling is they don’t have the processes and talent in-house to make these things happen," says Ryan Johnson, categories director for global freelance marketplace Upwork. By tracking data from Upwork’s extensive database, Johnson and his team have identified the top 10 skills companies need to drive a successful IoT strategy.

Data is sourced from the Upwork database and is based on annual job posting growth and skills demand, as measured by the number of job posts mentioning these skills posted on Upwork from October 2014 to December 2015.

1. Circuit design – 231 percent

Connected devices require companies to adjust and adapt chip design and development to account for new system requirements. For example, applications that rely on long-life batteries may need to have specially designed circuit boards to optimize power consumption, or have multiple chips and sensors on one circuit board. "Within circuit design, we’re seeing strong demand for printed circuit board (PCB) and 3D design," Johnson says.

2. Microcontroller programming – 225 percent

The Internet of Things comprises billions of small, interconnected devices, many of which require, at minimum, a microcontroller to add intelligence to the device to help with processing tasks. Microcontrollers are low-cost, low-power embedded chips that have programming and data memory built onto the system. “Specific to microcontroller languages, we’re seeing particularly strong demand for professionals experienced with the Arduino programming language, which is commonly used in building sensor and automation projects,” says Johnson.

3. AutoCAD – 216 percent

AutoCAD is the premier design software for engineering applications and has seen strong growth as the number and complexity of IoT devices continues to increase. Smart, connected products often require a whole new set of design principles, such as designs that achieve hardware standardization or enable personalization. "Product development processes will need to accommodate late-stage design changes quickly and efficiently, making this an ideal use-case for freelance talent that’s skilled in AutoCAD," Johnson says.


4. Machine learning – 199 percent

Machine learning algorithms help create smarter appliances, applications and other products by using data from sensors and other connected devices. Machine learning algorithms can be used to make predictions based on identifying data patterns from these devices, but that requires experts in big data management and machine learning, Johnson says. “To help give companies a competitive advantage, they’re hiring data scientists to build adaptive algorithms and data analytics capabilities to extract the value of this new data,” he says.

5. Security infrastructure – 194 percent

Information security and fears of increased exposure of data, not to mention device and physical security, are some of the top impediments to IoT development, according to research from TEKsystems. “Companies experienced in cloud security have had a good introduction to this; however, the added scale and complexity of IoT connectivity, communications, and the endpoints themselves complicate things. Within security infrastructure, we’re seeing strong demand on our platform for network security developers and programmers,” Johnson says.

6. Big data – 183 percent

IoT has greatly increased the amount of data for organizations to analyze. Companies need to collect all the data that is relevant to their business while simultaneously filtering out redundant data and protecting that data. This requires a highly efficient mechanism that includes software and protocols, Johnson says. "As IoT and the proliferation of big data continues to rise, we’re seeing strong demand for data scientists and back-end engineers who can collect, organize, analyze and architect these disparate sources of data. Within that, we’re seeing particularly strong demand for professionals who have experience with Hadoop and Apache Spark," he says.

7. Electrical engineering – 159 percent

The creation of the next generation of connected devices requires both software and electrical engineering expertise. "Electrical engineers are being brought in to help with embedded device development for mobile applications, and for radio frequency (RF)/analog and microwave engineering for communication systems and GPS on the devices," Johnson says.

8. Security engineering – 124 percent

Security is such a huge concern in the IoT market. High-profile data breaches have heightened consumers’ awareness of data security and privacy issues that may occur if a connected device is breached or hacked and data exposed, Johnson says. “To help mitigate potential risks, companies are investing in security engineering to conduct in-depth assessments that identify both physical and logical security threats to embedded systems such as local controllers/gateways and determine the risk at the device level. We’re seeing strong demand for professionals with security analysis and vulnerability assessment experience,” Johnson says.

9. Node.js – 86 percent

Node.js is an open-source environment for server-side web development used to manage connected devices such as the Arduino and Raspberry Pi, among others. With the availability of boards like Raspberry Pi, Node.js is becoming more of an option for developers looking to leverage their existing expertise in building applications for IoT, says Johnson. "Node.js also has very low resource requirements, a feature that developers are already leveraging in data-intensive IoT scenarios. From wearables to machine-to-machine (M2M) communication, Node.js is rapidly becoming the language and platform of choice for IoT," he says.

10. GPS development – 66 percent

The GPS market is seeing a resurgence thanks to IoT, specifically wearables, smart vehicles and logistics companies. Analyst firm ABI predicts that the GPS market will reach $3.5 billion in 2019 as businesses and consumers embrace location-aware devices. “This is creating a demand for professionals who can help develop GPS-enabled technology for wearables, smart vehicles and other IoT applications,” says Johnson.


Cloud28+ turns its cloud catalog into an enterprise app store


Cloud28+, the cloud services federation backed by Hewlett Packard Enterprise, now wants to help you install enterprise applications, not just choose them from its catalog.

Although HPE is the driving force behind Cloud28+, the federation of independent software vendors, resellers and service providers now has 225 members, which are pushing to simplify cloud software deployment.

The federation plans to open its new App Center for business later this summer, and will begin stocking its virtual shelves on June 7 with the opening of an App Onboarding Center. This will containerize workloads submitted by vendors and resellers and test them for compatibility, initially for free.

The workloads on offer will be more diverse than those contained in the original Cloud28+ catalog, as the federation is opening up to additional technology platforms. Once strictly an open-source, OpenStack shop, it is now embracing Microsoft Azure, VMware, Ormuco and Docker.

Docker is key to the App Center’s operation, in fact, as Cloud28+ has adopted Docker Datacenter as its containers-as-a-service framework.

“The value of Cloud 28+ is in multi-cloud with monetization at the edge,” said Xavier Poisson, HPE’s vice president for hybrid IT in Europe, Middle East and Africa, at a meeting for Cloud28+ partners on Thursday.

That monetization at the edge means that HPE, through Cloud28+, is providing a common way for cloud infrastructure providers around Europe to deliver software and services to end users.

Another of the federation’s innovations announced Thursday is the introduction of a series of tools allowing vendors and service providers to market their services, generate customer leads and track their performance.

One of the advantages of Cloud28+ for participating service providers operating in only one country is that it makes them more visible to independent software vendors from other countries looking for new distribution channels, said Khaled Chaar, managing director of German cloud service provider Pironet NDH. That enables companies like his to offer their customers a broader range of cloud-ready software, he said.

Although many applications are already cloud-ready, for a typical business the 20 percent or so of the applications it uses that aren’t cloud-ready are probably the most valuable, industry-specific ones, he said. Making it easier to move those to the cloud will present significant advantages, he said.

Make Your PowerPoint Slides Suck Less With This New Shutterstock Plug-in


Preview how high-quality Shutterstock images look in your presentations before you commit to buying them.

How to monitor, measure, and manage your broadband consumption


Forget that bass; in the digital world, it’s all about that bandwidth. You’re paying your ISP for a given amount of bandwidth, but it’s up to you to manage how it’s consumed. Whether or not you have a data cap—and even if your data cap is high enough that you never bang into it—simply letting all the devices on your network engage in a battle for supremacy is a recipe for problems.

You could experience poor video streaming, choppy VoIP calls, or debilitating lag in your online gaming sessions. And if you do have a data cap (and yes, they are evil), blowing through it can hit you in the pocketbook, expose you to throttling (where your ISP drastically, if temporarily, reduces your connection speed), or both.

Those are the problems; here are the solutions: We’ll show you how you can keep your ISP honest by measuring your Internet connection speed, so you can make sure you’re getting what you’re paying for; we’ll help you identify any bandwidth hogs on your network, so you can manage their consumption; and we’ll show you how you can tweak your router to deliver the best performance from everything on your home network.

Make sure you’re getting what you paid for

Your home network will most certainly be faster than your Internet connection, but it’s the speed of your Internet connection that will have the biggest impact on your media-streaming experience—at least when you’re streaming media from services such as Netflix, Amazon Video, Spotify, Tidal, and the like. So the first step in your bandwidth audit should be to verify that your ISP is delivering the speed you’re paying for (the vast majority of ISPs offer their services in tiers, charging more for higher speeds).


Results from Speedtest.net for my home connection.

The best way to do that is by visiting a third-party website such as Ookla’s Speedtest.net or—if you don’t like Flash—the HTML5-based Speedof.me. To get accurate baseline speeds, check from a device that’s connected directly to your broadband gateway (i.e., your DSL or cable modem, not your router), with all other wired and wireless devices disconnected. You might even want to test a couple of times at different hours of the day, since speeds can vary. Additionally, run some tests while other devices are using the Internet to see the differences.
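If you prefer a scriptable test, the third-party speedtest-cli Python package can run an Ookla-style test programmatically (an assumption of mine; the sites above work just as well):

# pip install speedtest-cli
import speedtest

st = speedtest.Speedtest()
st.get_best_server()              # pick the closest test server
down = st.download() / 1_000_000  # bits per second -> Mbps
up = st.upload() / 1_000_000
print(f"Download: {down:.1f} Mbps, Upload: {up:.1f} Mbps")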

Compare your baseline results to the speeds your ISP has committed to deliver with the plan you’re paying for. If you’re seeing significantly lower speeds, call your provider and ask them to check your connection. They might be able to run some diagnostics at their end and offer some suggestions to fix the problem before they send out a tech.

You also want to check the Internet speeds from any device you’re seeing performance issues on. Devices that are hardwired into the network should achieve speeds on par with your baseline if other devices on the network aren’t using much bandwidth. On wireless devices, the speeds can be greatly reduced when further away from the wireless router or if there’s interference from neighboring Wi-Fi networks, other wireless devices, or appliances that can cause interference (such as microwave ovens, which produce tremendous amounts of noise in the 2.4GHz frequency spectrum while operating).

How much bandwidth do you really need?

Keep in mind, the bandwidth your ISP promises to deliver isn’t a per-device ceiling—it’s the total bandwidth available for your Internet connection, so it’s shared among all the devices on your network. If you have a plan offering download speeds of 20Mbps and upload speeds of 1.0Mbps, for instance, and you have four devices connected to the Internet, you could say each device might see a maximum download speed of 5.0Mbps and a maximum upload speed of 0.25Mbps.

In reality, it’s not quite that simple. The manner in which your Internet bandwidth is distributed depends on your router and the demand from each device. With a simple router with factory-default settings, it’s every client device for itself in a mad scramble for bandwidth. Client devices that are sensitive to lag—media streamers, VoIP phones, and online games—can suffer in this scenario because applications that aren’t sensitive to lag—web browsers and email clients, for example—are treated the same as ones that are. I’ll show you how you can manage your bandwidth later.


If you don’t configure your router properly, all the devices on your network will be treated equally in terms of bandwidth allocation.

To give you an idea of what’s acceptable for Internet speeds, I suggest having about 2.0Mbps of download speed per device for general usage (emailing and web browsing), and about 5.0Mbps of download speed for each HD video stream. So if one person on your network is watching YouTube videos, another is streaming a movie from Netflix, both are simultaneously using a tablet or smartphone to browse the web, and another is on a Skype video chat, I suggest having 19Mbps of download bandwidth: that’s 5.0Mbps x 3 + 2.0Mbps x 2.
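Those rules of thumb are easy to turn into a quick calculator. A small sketch using the per-activity estimates above (ballpark figures, not hard requirements):

HD_STREAM_MBPS = 5.0   # per HD video stream (Netflix, YouTube, video chat)
GENERAL_MBPS = 2.0     # per device doing email/web browsing

def needed_download_mbps(hd_streams: int, general_devices: int) -> float:
    return HD_STREAM_MBPS * hd_streams + GENERAL_MBPS * general_devices

# The scenario above: 3 HD streams plus 2 browsing devices
print(needed_download_mbps(3, 2))   # -> 19.0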

The maximum upload speed of your Internet connection typically isn’t as crucial, because most people consume more content than they create and upload to the Internet. That’s a good thing given that most ISPs deliver asymmetric service (i.e., download speeds that are much higher than upload speeds). Having said that, know that the upload speeds can make a huge difference for applications such as Skype or FaceTime since video is traveling in both directions—up and down—simultaneously. For high-quality (non-HD) video chats, I suggest adding about 0.5Mbps of upload bandwidth or about 1.5Mbps for full HD.

Your upload bandwidth also comes into play when you or others are remotely accessing devices or files on your network when you’re away from home. It’s hard to suggest a fixed number on that activity, though; just remember the faster the upload speed, the faster the file transfers and streams will be coming from your network.

Monitor your usage to identify bandwidth hogs

Whether you have a data cap or are having performance issues, consider tracking the bandwidth usage of all your devices to see who or what is hogging the most bandwidth.

You might consider using a Windows-based program like BitMeter OS (free and open-source) or NetWorx (also free), which are most useful if all or most of the Internet devices on the network are Windows PCs or laptops. These applications will track usage over time for the particular computer they’re installed on, and offer up graphs and tables of data you can review. You can also set a data quota and be alerted when a device approaches or exceeds that limit.


NetWorx provides detailed bandwidth reports, but only for the PCs on your network.
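For a rough, scriptable counter on a single machine, the cross-platform psutil package can sample per-interface byte counts (a sketch of my own, not one of the tools mentioned above):

# pip install psutil
import time
import psutil

before = psutil.net_io_counters(pernic=True)
time.sleep(1)
after = psutil.net_io_counters(pernic=True)

for nic, stats in after.items():
    down = (stats.bytes_recv - before[nic].bytes_recv) * 8 / 1_000_000
    up = (stats.bytes_sent - before[nic].bytes_sent) * 8 / 1_000_000
    print(f"{nic}: down {down:.2f} Mbps, up {up:.2f} Mbps")

Like the Windows programs above, this only sees the traffic of the machine it runs on, which is why whole-network tracking has to happen at the router.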

If you’re using multiple types of devices on the network—smartphones, tablets, gaming consoles, and TVs, in addition to computers running Windows—it would be ideal to track the entire network’s bandwidth from a single point, so you don’t have to set up tracking on each device. Since the Internet traffic of each device needs to be monitored, it’s not as easy as installing a simple program on a PC. The traffic must be monitored from the router or another device strategically placed between the Internet connection and the network clients.

Although most routers don’t track bandwidth consumption by network device, consider checking yours just in case. If your router doesn’t support it, consider buying another router or flashing a supported router with aftermarket firmware that does support it. If you decide to buy a new router, the enterprise-oriented Open Mesh routers and access points provide quite a bit of bandwidth usage detail. Their hardware can be managed via a free online account, and it supports wireless mesh-networking technology that makes it easier to broaden your Wi-Fi coverage.


Open Mesh shows a graph of bandwidth usage of each client, top clients, top devices, top applications, and top APs on its Network Overview page.

If you don’t want to replace your router, flashing it with aftermarket firmware is a good option, provided your router has that capability. DD-WRT is one popular aftermarket firmware that supports many router brands and models, but by default it shows only your total bandwidth usage. To find the usage per client or device, you’d also need to install an add-on like DDWRT-BWMON.

Cucumber Tony is a lesser-known firmware to consider. I reviewed it for TechHive’s sister site NetworkWorld recently and found that it supports a couple of different router brands. Gargoyle is another firmware you might not be familiar with. It offers some good bandwidth monitoring and control functionality, with support for a few router brands.

For the more adventurous, another option is to build your own router out of an old or spare PC, or even run it on your main PC with a virtual machine. Sophos UTM and Untangle, for instance, are operating systems that provide routing, firewall, web filtering, bandwidth monitoring, and many more network functions.


Cucumber Tony shows a graph and table of each device’s bandwidth usage on the Clients page.

Utilize your router’s QoS to distribute bandwidth

Most routers have a quality-of-service (QoS) feature, but it’s not enabled by default on some routers. The idea behind QoS is to regulate bandwidth usage in a way that ensures good performance on the network, particularly with more sensitive types of services such as video streams, VoIP calls, and online gaming, where any lag can be quite noticeable. It basically gives these types of traffic higher priority—on the network and to Internet access from the network—compared to services that aren’t sensitive to lag (e.g., file downloads, torrents, software updates, and general web browsing).

The exact QoS features and settings vary by router brand and model, but most provide a way for you to give particular devices higher priority by tagging their MAC or IP address, or by marking types of services for higher priority. Some routers come with a collection of default QoS settings that you can tweak and customize.

Log in to your router and see if it has any QoS settings. Take a look at the default settings, as they might already give the most common services higher priority. If not, see if it allows you to classify traffic based upon the service type. I suggest going that route first to help alleviate any performance issues on the network. Second, consider giving higher priority to any critical devices.


This Netgear WNR2000 802.11n router has QoS pre-configured for a limited number of applications, but you must configure your own rules for anything the manufacturer didn’t think of.

Optimize your network to increase speeds

At first thought, your Internet connection seems to be the bottleneck. Your local network might be able to handle up to 1000Mbps of bandwidth, while your Internet download speeds are likely less than 60Mbps (much less than that if you’re relying on DSL or—shudder—satellite Internet service). You’d think that your network could easily handle it, but sometimes that’s not the case. This is especially true when you have many devices on the network, particularly Wi-Fi devices.

You might not need super-fast speeds for every device or online service, but the quicker any one device is served by the router, the more time the router has to serve the other devices on the network. Thus, increasing the speed of just one device can have an impact on the others. The more devices you speed up, the more noticeable the improvement will be, especially for those lag-sensitive services.

Whenever possible, connect computers and devices to the router or network via an ethernet cable. This helps alleviate congestion on the airwaves, a much more complex and imperfect connection medium than a cable.

For devices that can’t be hardwired, try to utilize your router’s 5GHz frequency band as much as possible, as the 2.4GHz band is much more congested and prone to interference. For network clients that can connect only to your 2.4GHz network, check channel usage so you can use the least-crowded channel available. Additionally, ensure you’re using only WPA2 security for your Wi-Fi, as enabling the first-generation WPA (or the even older, insecure WEP) limits wireless speeds.

If your wireless router doesn’t support 5GHz, I suggest upgrading to a dual-band router so you can utilize these faster, higher-quality frequencies. Keep in mind, the Wi-Fi devices must also support 5GHz, otherwise they’ll still connect via 2.4GHz. For computers and devices that can be upgraded to 5GHz Wi-Fi, I suggest doing so. If you have multiple devices without 5GHz, I suggest upgrading the ones with any performance issues first.

Finally, evaluate your Wi-Fi coverage to ensure that your wireless router is placed in the most central spot around where you use the wireless devices most often. If you still regularly have low or poor Wi-Fi signals, consider extending your network.

This story, “How to monitor, measure, and manage your broadband consumption” was originally published by TechHive.

Google has a new chip that makes machine learning way faster


Google has taken a big leap forward with the speed of its machine learning systems by creating its own custom chip that it’s been using for over a year.

The company was rumored to have been designing its own chip, based partly on job ads it posted in recent years. But until today it had kept the effort largely under wraps.

It calls the chip a Tensor Processing Unit, or TPU, named after the TensorFlow software it uses for its machine learning programs. In a blog post, Google engineer Norm Jouppi refers to it as an accelerator chip, which means it speeds up a specific task.

At its I/O conference Wednesday, CEO Sundar Pichai said the TPU provides an order of magnitude better performance per watt than existing chips for machine learning tasks. It’s not going to replace CPUs and GPUs but it can speed up machine learning processes without consuming a lot more energy.

As machine learning becomes more widely used in all types of applications, from voice recognition to language translation and data analytics, having a chip that speeds those workloads is essential to maintaining the pace of advancements.

And as Moore’s Law slows down, reducing the gains from each new generation of processor, using accelerators for key tasks becomes even more important. Google says its TPU provides the equivalent gains to moving Moore’s Law forward by three generations, or about seven years.

The TPU is in production use across Google’s cloud, including powering the RankBrain search result sorting system and Google’s voice recognition services. When developers pay to use the Google Voice Recognition Service, they’re using its TPUs.

Urs Hölzle, Google’s senior vice president for technical infrastructure, said during a press conference at I/O that the TPU can augment machine learning processes but that there are still functions that require CPUs and GPUs.

Google started developing the TPU about two years ago, he said.

Right now, Google has thousands of the chips in use. They’re able to fit in the same slots used for hard drives in Google’s data center racks, which means the company can easily deploy more of them if it needs to.

Right now, though, Hölzle says that they don’t need to have a TPU in every rack just yet.

If there’s one thing that Google likely won’t do, it’s sell TPUs as standalone hardware. Asked about that possibility, Google enterprise chief Diane Greene said that the company isn’t planning to sell them for other companies to use.

Part of that has to do with the way application development is heading — developers are building more and more applications in the cloud only, and don’t want to worry about managing hardware configurations, maintenance and updates.

Another possible reason is that Google simply doesn’t want to give its rivals access to the chips, which it likely spent a lot of time and money developing. 

We don’t yet know what exactly the TPU is best used for. Analyst Patrick Moorhead said he expects the chip will be used for inferencing, a part of machine learning operations that doesn’t require as much flexibility.

Right now, that’s all Google is saying. We still don’t know which chip manufacturer is building the silicon for Google. Hölzle said that the company will reveal more about the chip in a paper to be released this fall.

Google’s new tools make it easier to integrate apps with its spreadsheets and slides


Google is updating the developer tools for its Docs productivity suite in an effort to make it easier for companies to integrate third-party applications with its presentation, spreadsheet and word processing software. 

Software makers can start working with a new tool that lets them sync data between a Google Sheet and their application for easy data compilation and sharing among people who use the online spreadsheet software. In addition, Google also announced a new Slides API that will allow users to automatically populate slide decks with information from outside sources. 

Software packages like Google Docs don’t exist in a vacuum, and offering developers a way to more deeply integrate with the company’s products could lead to more companies becoming interested in picking up the productivity suite because of how it works with other software. 

Case in point: Salesforce is using the new Sheets API to sync data from a customer’s CRM into a spreadsheet, which can then be shared with other people inside or outside the user’s organization. When information gets changed in Salesforce, it’ll propagate across any spreadsheets synced with it.

That’s useful for making sure that information shared within an organization using Google Sheets is up to date and accurate. 
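To give a sense of what these integrations look like at the API level, here is a hedged sketch that writes a few rows to a sheet with the Sheets v4 API via the google-api-python-client package (the spreadsheet ID and credentials file are placeholders, and this is my illustration rather than Google’s sample code):

from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_service_account_file(
    "service-account.json",   # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/spreadsheets"],
)
service = build("sheets", "v4", credentials=creds)

rows = [["Account", "Stage", "Amount"],
        ["Acme Corp", "Negotiation", 25000]]

service.spreadsheets().values().update(
    spreadsheetId="YOUR_SPREADSHEET_ID",   # placeholder
    range="Sheet1!A1",
    valueInputOption="RAW",
    body={"values": rows},
).execute()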

The Slides API integration is designed to make it easier for business users to create visual presentations without a whole lot of effort. For example, Trello is working on a feature that would let users take items stored on a “board” in its application and turn them into slides with a couple of clicks, without having to go through all the trouble of building a slide deck by hand. 

Security-conscious businesses might not be a fan of these integrations yet. Right now, it’s not possible to programmatically exclude users from seeing information they’re not supposed to while still sharing a spreadsheet or slide deck with them. If people are working on a team with tightly secured information that shouldn’t be shared with others, these features kind of backfire. 

For smaller organizations, or teams that aren’t as concerned about keeping information under wraps, the integrations these APIs open up will likely be welcome extensions to Sheets and Slides.

The company announced the updates at its I/O developer conference Wednesday, which included a number of other announcements like more details about the future of Android and information on the company’s VR ambitions. 

RES Enhances RES ONE Workspace with Native Automation Technology and Announces RES ONE Workspace Core


RES, the leader in enabling, automating and securing digital workspaces, today announced the addition of enterprise automation to its RES ONE… Read more at VMblog.com.

I Love My Amazon WorkSpace!


Early last year my colleague Steve Mueller stopped by my office to tell me about an internal pilot program that he thought would be of interest to me. He explained that they were getting ready to run Amazon WorkSpaces on the Amazon network and offered to get me on the waiting list. Of course, being someone that likes to live on the bleeding edge, I accepted his offer.

Getting Started
Shortly thereafter I started to run the WorkSpaces client on my office desktop, a fairly well-equipped PC with two screens and plenty of memory. At that time I used the desktop during the working day and a separate laptop when I was traveling or working from home. Even though I used Amazon WorkDocs to share my files between the two environments, switching between them caused some friction. I had distinct sets of browser tabs, bookmarks, and the like. No matter how much I tried, I could never manage to keep the configurations of my productivity apps in sync across the environments.

After using the WorkSpace at the office for a couple of weeks, I realized that it was just as fast and responsive as my desktop. Over that time, I made the WorkSpace into my principal working environment and slowly severed my ties to my once trusty desktop.

I work from home two or three days per week. My home desktop has two large screens, lots of memory, a top-notch mechanical keyboard, and runs Ubuntu Linux. I run VirtualBox and Windows 7 on top of Linux. In other words, I have a fast, pixel-rich environment.

Once I was comfortable with my office WorkSpace, I installed the client at home and started using it there. This was a giant leap forward and a great light bulb moment for me. I was now able to use my fast, pixel-rich home environment to access my working environment.

At this point you are probably thinking that the combination of client virtualization and server virtualization must be slow, laggy, or less responsive than a local device. That’s just not true! I am an incredibly demanding user. I pound on the keyboard at a rapid-fire clip, I keep tons of windows open, alt-tab between them like a ferret, and I am absolutely intolerant of systems that get in my way.  My WorkSpace is fast and responsive and makes me even more productive.

Move to Zero Client
A few months into my WorkSpaces journey, Steve IM’ed me to talk about his plan to make some Zero Client devices available to members of the pilot program. I liked what he told me and I agreed to participate. He and his sidekick Michael Garza set me up with a Dell Zero Client and two shiny new monitors that had been taking up space under Steve’s desk. At this point my office desktop had no further value to me. I unplugged it, saluted it for its meritorious service, and carried it over to the hardware return shelf in our copy room. I was now all-in on, and totally dependent on, my WorkSpace and my Zero Client.

The Zero Client is a small, quiet device. It has no fans and no internal storage. It simply connects to the local peripherals (displays, keyboard, mouse, speakers, and audio headset) and to the network. It produces little heat and draws far less power than a full desktop.

During this time I was also doing quite a bit of domestic and international travel. I began to log in to my WorkSpace from the road. Once I did this, I realized that I now had something really cool—a single, unified working environment that spanned my office, my home, and my laptop. I had one set of files and one set of apps and I could get to them from any of my devices. I now have a portable desktop that I can get to from just about anywhere.

The fact that I was using a remote WorkSpace instead of local compute power faded into the background pretty quickly. One morning I sent the team an email with the provocative title “My WorkSpace has Disappeared!” They read it in a panic, only to realize that I had punked them, and that I was simply letting them know that I was able to focus on my work, and not on my WorkSpace. I did report a few bugs to them, none of which were serious, and all of which were addressed really quickly.

Dead Laptop
The reality of my transition became apparent late last year when the hard drive in my laptop failed one morning. I took it in to our IT helpdesk and they replaced the drive. Then I went back up to my office, reinstalled the WorkSpaces client, and kept on going. I installed no other apps and didn’t copy any files. At this point the only personal items on my laptop are the registration code for the WorkSpace and my stickers! I do still run PowerPoint locally, since you can never know what kind of connectivity will be available at a conference or a corporate presentation.

I also began to notice something else that made WorkSpaces different and better. Because laptops are portable and fragile, we all tend to think of the information stored on them as transient. In the dark recesses of our minds we know that one day something bad will happen and we will lose the laptop and its contents. Moving to WorkSpaces takes this worry away. I know that my files are stored in the cloud and that losing my laptop would be essentially inconsequential.

It Just Works
To borrow a phrase from my colleague James Hamilton, WorkSpaces just works. It looks, feels, and behaves just like a local desktop would.

Like I said before, I am a demanding user. I have two big monitors, run lots of productivity apps, and keep far too many browser windows and tabs open. I also do things that have not been a great fit for virtual desktops up until now. For example:

Image Editing – I capture and edit all of the screen shots for this blog (thank you, Snagit).

Audio Editing – I use Audacity to edit the AWS Podcasts. This year I plan to use the new audio-in support to record podcasts on my WorkSpace.

Music – I installed the Amazon Music player and listen to my favorite tunes while blogging.

Video – I watch internal and external videos.

Printing – I always have access to the printers on our corporate network. When I am at home, I also have access to the laser and ink jet printers on my home network.

Because the WorkSpace is running on Amazon’s network, I can download large files without regard to local speed limitations or bandwidth caps. Here’s a representative speed test (via Bandwidth Place):

Sense of Permanence
We transitioned from our pilot WorkSpaces to our production environment late last year and are now provisioning WorkSpaces for many members of the AWS team. My WorkSpace is now my portable desktop.

After having used WorkSpaces for well over a year, I have to report that the biggest difference between it and a local environment isn’t technical. Instead, it simply feels different (and better).  There’s a strong sense of permanence—my WorkSpace is my environment, regardless of where I happen to be. When I log in, my environment is always as I left it. I don’t have to wait for email to sync or patches to install, as I did when I would open up my laptop after it had been off for a week or two.

Now With Tagging
As enterprises continue to evaluate, adopt, and deploy WorkSpaces in large numbers, they have asked us for the ability to track usage for cost allocation purposes. In many cases they would like to see which WorkSpaces are being used by each department and/or project. Today we are launching support for tagging of WorkSpaces. The WorkSpaces administrator can now assign up to 10 tags (key/value pairs) to each WorkSpace using the AWS Management Console, AWS Command Line Interface (CLI), or the WorkSpaces API. Once tagged, the costs are visible in the AWS Cost Allocation Report where they can be sliced and diced as needed for reporting purposes.
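For example, here is a minimal sketch of what tagging could look like through the API, using boto3 (the AWS SDK for Python); the WorkSpace ID, tag keys, and values below are hypothetical placeholders:

    import boto3

    # Tag a WorkSpace for cost allocation (sketch; the ID and values are made up).
    workspaces = boto3.client("workspaces")

    workspaces.create_tags(
        ResourceId="ws-0123456789",  # hypothetical WorkSpace ID
        Tags=[
            {"Key": "Department", "Value": "Engineering"},
            {"Key": "Project", "Value": "Podcasts"},
        ],
    )

    # The tags assigned to a WorkSpace can be read back with describe_tags.
    for tag in workspaces.describe_tags(ResourceId="ws-0123456789")["TagList"]:
        print(tag["Key"], "=", tag["Value"])

Using consistent keys such as Department and Project across WorkSpaces is what makes the slicing and dicing in the Cost Allocation Report useful.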

Here’s how the WorkSpaces administrator can use the Console to manage the tags for a WorkSpace:

Tags are available today in all Regions where WorkSpaces is available: US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney).

Learning More
If you have found my journey compelling and would like to learn more, here are some resources to get you started:

Request a Demo
If you and your organization could benefit from Amazon WorkSpaces and would like to learn more, please get in touch with our team at [email protected].

Jeff;

Open, Linux-based platform simplifies wireless IoT

The content below is taken from the original (Open, Linux-based platform simplifies wireless IoT), to continue reading please visit the site. Remember to respect the Author & Copyright.

Sierra Wireless and Element14 unveiled an open-spec Arduino compatible “mangOH Green IoT Platform” based on Sierra’s 3G, GNSS, and WiFi modules running Linux. Sierra Wireless announced a beta release of its AirPrime WP module and open-source “mangOH” carrier board last June. Now, the company has formally released the products with the help of Element14, which […]

Raspberry Pi Zero gains a camera connector

The content below is taken from the original (Raspberry Pi Zero gains a camera connector), to continue reading please visit the site. Remember to respect the Author & Copyright.

30,000 hot Pis in stores now and factory ready to bake plenty more

The Raspberry Pi Zero v1.3 with camera connector

The Raspberry Pi Zero has added a camera connector.

Chief Pi guy Eben Upton has explained that the new connector came about as a result of colossal demand for the minuscule computer.

The factory baking Pis could not keep up with demand for the Zero and then had to pause production once the Raspberry Pi 3 debuted.

Upton says during that pause the Pi team discovered “the same fine-pitch FPC connector that we use on the Compute Module Development Kit just fits onto the right hand side of the board”. The outfit is therefore offering a cable that connects to the FPC slot on one side and the Raspberry Pi camera module on the other.

Raspberry Pi evangelist Matt Richardson, who devised the cable, has shown it off on Twitter.

Upton says 30,000 of the new model, version 1.3, are available now and that the Pi bakery is going to keep churning them out until users’ appetites are sated. ®


OpenStack Developer Mailing List Digest May 7-13

The content below is taken from the original (OpenStack Developer Mailing List Digest May 7-13), to continue reading please visit the site. Remember to respect the Author & Copyright.

SuccessBot Says

  • Pabelanger: bare-precise has been replaced by ubuntu-precise. Long live DIB
  • bknudson: The Keystone CLI is finally gone. Long live openstack CLI.
  • Jrichli: swift just merged a large effort that started over a year ago that will facilitate new capabilities – like encryption
  • All

Release Countdown for Week R-20, May 16-20

  • Focus
    • Teams should have published summaries from summit sessions to the openstack-dev mailing list.
    • Spec writing
    • Review priority features
  • General notes
    • Release announcement emails will be tagged with ‘new’ instead of ‘release’.
    • Release cycle model tags now say explicitly that the release team manages releases.
  • Release actions
    • Release liaisons should add their name and contact information to this list [1].
    • New liaisons should understand release instructions [2].
    • Project teams that want to change their release model should do so before the first milestone in R-18.
  • Important dates
    • Newton 1 milestone: R-18 June 2
    • Newton release schedule [3]

Collecting Our Wiki Use Cases

  • From the beginning, the community has used a wiki [4] as its default platform for publishing community information.
  • There’s a struggle with:
    • Keeping things up-to-date.
    • Preventing vandalism.
    • Outdated processes.
    • Projects that no longer exist.
  • This outdated information can make the wiki confusing to use, especially for newcomers, who are often sent there by search engine results.
  • Various efforts have happened to push information out of the wiki to proper documentation guides like:
    • Infrastructure guide [5]
    • Project team guide [6]
  • Peer reviewed reference websites:
  • There are still a lot of use cases for which a wiki is a good solution, so we’ll likely need a lightweight publication platform like the wiki to cover them.
  • If you use the wiki as part of your OpenStack work, make sure it’s captured in this etherpad [9].
  • Full thread

Supporting Go (continued)

  • Continuing from the previous Dev Digest [10].
  • Before Go 1.5 (which introduced -buildmode=shared), Go did not support shared libraries. As a consequence, when a library is upgraded, the release team has to trigger a rebuild of each and every reverse dependency.
  • In Swift’s case for looking at Go, it’s hard to write a network service in Python that shuffles data between the network and a block device while effectively using all the available hardware.
    • Fork()’ing child processes and using cooperative concurrency via eventlet has worked well, but managing all async operations across many cores and many drives is really hard. Python doesn’t offer an efficient interface for this; it’s a question of having efficient tools for the job at hand.
    • Eventlet, asyncio, or anything else single-threaded will have the same problem: filesystem syscalls can take a long time and block the calling thread. The typical single-threaded pattern (see the sketch at the end of this section) is:
      • Call select()/epoll() to wait for something to happen on many file descriptors.
      • For each ready file descriptor, try to read it; if the kernel returns EWOULDBLOCK instead, move on to the next file descriptor.
  • The Designate team explains their reasons for Go:
    • MiniDNS is a component that, due to the way it works, is difficult to make major improvements to.
    • The component takes data and sends a zone transfer every time a record set gets updated. That is a full (AXFR) zone transfer, where every record in a zone gets sent to each DNS server that end users can hit.
      • There is a DNS standard for incremental transfers (IXFR), but it’s complex to implement and can often end up reverting to a full zone transfer anyway.
    • Ns[1-6].example.com may be tens or hundreds of servers behind anycast IPs and load balancers.
    • Internal or external zones can be quite large; think 200-300 MB.
    • A zone can see high churn, with a record added/removed for each instance boot/destroy.
    • The Designate team is small; after looking at the options and judging the developer hours available, they decided on a different language.
  • Looking at Designate’s implementation, there is some low-hanging fruit for improvement:
    • Stop spawning a thread per request.
    • Stop instantiating an Oslo config object per request.
    • Avoid 3 round trips to the database on every request. The majority of request time here is not spent in Python. This data should be trivial to cache, since Designate knows when to invalidate it.
      • In a real-world use case, there could be cache misses due to the shuffle order of multiple miniDNS servers.
  • The Designate team saw a 10x improvement for a 2000-record AXFR (without caching). Caching would probably speed up the Go implementation as well.
  • Go historically has poor performance with multiple cores [11].
    • The main advantage of the language could be its CSP concurrency model.
    • Twisted does this very well, but we as a community have consistently supported eventlet. Eventlet has a threaded programming model, which is poorly suited to Swift’s case.
    • PyPy got a 40% performance improvement over CPython on a benchmark of Twisted’s DNS component six years ago [12].
  • Right now our stack already depends on C, Python, Erlang, Java, shell, etc.
  • End users emphatically do not care about the language API servers were written in. They want stability, performance and features.
  • The infrastructure-related issues with Go (reliable builds, packaging, etc.) are being figured out [13].
  • Swift has tested running under PyPy with some conclusions:
    • Assuming production-ready stability of PyPy and OpenStack, everyone should use PyPy over CPython.
      • It’s just simply faster.
      • There are still some garbage-collector-related issues to work out in Swift’s usage.
      • A few patches improve socket handling in Swift so that it runs better under PyPy.
    • PyPy only helps when you’ve got a CPU-constrained environment.
    • The Go targets in Swift are related to effective thread management, syscalls, and I/O.
    • See a talk from the Austin Conference about this work [14].
  • Full thread
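To make the single-threaded select()/epoll() pattern discussed above concrete, here is a minimal sketch in Python (not Swift’s actual code); the port number and on-disk object path are hypothetical, and the point is the comment marking where a filesystem read stalls the entire event loop:

    import select
    import socket

    # Minimal epoll-driven server (sketch only; error handling and
    # send-side flow control are elided for brevity).
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 6200))  # hypothetical port
    listener.listen(128)
    listener.setblocking(False)

    epoll = select.epoll()
    epoll.register(listener.fileno(), select.EPOLLIN)
    conns = {}

    while True:
        # Wait for activity on any registered file descriptor.
        for fd, _events in epoll.poll():
            if fd == listener.fileno():
                conn, _addr = listener.accept()
                conn.setblocking(False)
                epoll.register(conn.fileno(), select.EPOLLIN)
                conns[conn.fileno()] = conn
                continue
            try:
                data = conns[fd].recv(4096)
            except BlockingIOError:
                continue  # EWOULDBLOCK: nothing to read yet, move on
            if data:
                # The catch: regular files are always "ready", so epoll
                # cannot help here. A read from a slow or busy disk blocks
                # this one thread, and the whole event loop stalls.
                with open("/srv/node/d1/obj", "rb") as f:  # hypothetical path
                    payload = f.read()
                conns[fd].sendall(payload)
            else:
                epoll.unregister(fd)
                conns.pop(fd).close()

This is exactly why OS threads (or Go’s runtime, which multiplexes goroutines over threads that are allowed to block in syscalls) are attractive for the network-to-disk shuffling Swift describes.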

 

Google reveals the Chromium OS it uses to run its own containers

The content below is taken from the original (Google reveals the Chromium OS it uses to run its own containers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google’s decided the Chromium OS is its preferred operating system for running containers in its own cloud. And why wouldn’t it – the company says it uses it for its own services.

The Alphabet subsidiary offers a thing called “Container-VM” that it is at pains to point out is not a garden variety operating system you’d ever contemplate downloading and using in your own bit barn. Container-VM is instead dedicated to running Docker and Kubernetes inside Google’s cloud.

The Debian-based version of Container-VM has been around for a while, billed as a “container-optimised OS”.

Now Google has announced a new version of Container-VM “based on the open source Chromium OS project, allowing us greater control over the build management, security compliance, and customizations for GCP.”

The new Container-VM was built “primarily for running Google services on GCP”.

We therefore have here an OS built by Google to run Google itself, and now available to you if you want to run containers on Google’s cloud, which is of course a leading user of containers and creates billions of them every week.

It’s not unusual for a cloud provider to offer tight integration between its preferred operating systems and its cloud. Amazon Linux is designed to work very well in Amazon Web Services. Oracle wants you to take it as Red all the way up and down its stack. We also know that Windows Server 2016’s container-friendly Nano Server has powered Azure since 2016.

So Google’s not ahead of the pack here. But it does now have a rather stronger container story to tell. ®


Linksys will let you use open router code under new FCC rules

The content below is taken from the original (Linksys will let you use open router code under new FCC rules), to continue reading please visit the site. Remember to respect the Author & Copyright.

While the FCC’s imminent rules for wireless device interference are supposed to allow hackable WiFi routers, not every router maker sees it that way. TP-Link, for instance, is blocking open source firmware out of fear that you’ll run afoul of the regulations when they kick in on June 2nd. However, you won’t have to worry about that with Linksys’ fan-friendly networking gear. The Belkin-owned brand promises Ars Technica that its modifiable routers will allow open source firmware while obeying the FCC’s rules — you can tinker without fear of messing with nearby radar systems.

The hardware’s design is the key. Linksys says it’s been working with both the chip designers at Marvell and the developers of OpenWRT to make this work. The WRT routers keep the RF wireless data separate from the user-modifiable firmware, preventing you from stepping out of bounds. You theoretically can’t hack those limits, even though you have control over most everything else.

This won’t please those who think that any restriction on open source firmware is one too many. OpenWRT’s Imre Kaloz asks Ars why the FCC didn’t just punish infractions instead. However, Linksys’ solution shows that there’s at least some possibility of compromise between raw flexibility and safety.

Source: Ars Technica

How to Survive a Learning to Code Course, From a Recent Graduate

The content below is taken from the original (How to Survive a Learning to Code Course, From a Recent Graduate), to continue reading please visit the site. Remember to respect the Author & Copyright.

Ever considered learning how to code? Peter Hyde shares his top tips for surviving a coding course.

The sun sets on Xbox’s ‘Project Spark’ game creation tool

The content below is taken from the original (The sun sets on Xbox’s ‘Project Spark’ game creation tool), to continue reading please visit the site. Remember to respect the Author & Copyright.

Starting today, Project Spark, Microsoft’s quirky game creation game, is no longer for sale. And come August 12th, the servers will be shut down, Thomas Gratz of developer Team Dakota writes. As a consolation, anyone who bought the retail version "starter kit" will get a credit to their Microsoft account. If you redeemed the code inside after October 5th of last year (when the game went free-to-play) and prior to today, you’ll get a credit to use in the Xbox or Windows stores. Gratz says that the credits will be automatically applied for eligible customers.

There is a silver lining, though. Gratz notes that no layoffs occurred: team members transitioned to other roles within Microsoft after active development of the tool stopped last fall. Maintaining its behind-the-scenes systems wasn’t possible with a small group, hence the shutdown. Farewell, Project Spark, and thanks for giving Xbox One owners a chance at playing a version of P.T. on their console.

Via: Kotaku

Source: Project Spark

Tiny $1 STEM-oriented hacker board hits Indiegogo

The content below is taken from the original (Tiny $1 STEM-oriented hacker board hits Indiegogo), to continue reading please visit the site. Remember to respect the Author & Copyright.

Like the tiny BBC Micro:bit board, the “One Dollar Board” is aimed at introducing kids to computer programming and the Internet of Things at a young age. A team of Brazilian developers has just launched a “One Dollar Board” Indiegogo campaign aimed at funding a tiny, open source microcontroller board so simple and inexpensive that […]

Incremental update to Azure Stack PaaS services: Web Apps

The content below is taken from the original (Incremental update to Azure Stack PaaS services: Web Apps), to continue reading please visit the site. Remember to respect the Author & Copyright.

Today we released updates to App Service (Web Apps) for Azure Stack – you can find the bits here. This update streamlines setup/deployment while providing a more stable experience.

Please note that there is no in-place upgrade from the prior release. You will have to re-deploy to take advantage of these improvements.

To support this release we have updated our technical documentation to guide you through the improved experience.

Visit the Azure Stack forum for help or if you’d like to provide feedback. One specific topic we’d love your feedback on is which Azure Services you’d like to see come down to Azure Stack.

– The Microsoft Azure Stack Team

Yun Shield adds OpenWrt and WiFi to Arduinos

The content below is taken from the original (Yun Shield adds OpenWrt and WiFi to Arduinos), to continue reading please visit the site. Remember to respect the Author & Copyright.

Arduino LLC released a shield version of its Arduino Yún SBC, letting you add WiFi and Linux to Arduino boards, along with Ethernet and USB ports. The Arduino and Genuino Yún Shield peels off the OpenWrt-driven WiFi subsystem of the Arduino Yún SBC as a shield add-on, letting you add Internet access to other […]

Deploy your hybrid scenarios and solutions in Microsoft’s cloud

The content below is taken from the original (Deploy your hybrid scenarios and solutions in Microsoft’s cloud), to continue reading please visit the site. Remember to respect the Author & Copyright.

Are you trying to get a deeper understanding of how hybrid cloud scenarios can best serve your business? Just what are the elements of hybrid cloud for Microsoft’s cloud platforms and services? What layers are common across them?

The new Microsoft Hybrid Cloud for Enterprise Architects poster helps you:

  • Understand the breadth of support for hybrid scenarios in Microsoft’s cloud, including Azure PaaS, Azure IaaS, and SaaS (Office 365)
  • See the architecture of hybrid scenarios in Microsoft’s cloud and the common layers of on-premises infrastructure, networking, and identity
  • See the architecture of Azure PaaS-based hybrid apps and step through an example
  • See the architecture for Azure IaaS hybrid scenarios and line of business applications hosted on a cross-premises virtual network
  • See the architecture of SaaS-based hybrid scenarios and examples of hybrid configurations for Office 365

You can download this multi-page poster in PDF or Visio format. You can also print the six pages of this poster on tabloid format paper (also known as 11×17, ledger, or A3).

Also see the other posters in the Microsoft Cloud for Enterprise Architects Series.

  • Microsoft Cloud Services and Platform Options: http://bit.ly/1WthBzh
  • Microsoft Cloud Identity for Enterprise Architects: http://bit.ly/1WthBPu
  • Microsoft Cloud Security for Enterprise Architects: http://bit.ly/1WthDXL
  • Microsoft Cloud Networking for Enterprise Architects: http://bit.ly/1WthDXN
  • Microsoft Cloud Storage for Enterprise Architects: http://bit.ly/1WthDXO

To see all the resources for Microsoft cloud platforms and services, see Microsoft’s Enterprise Cloud Roadmap.

Note to Twitter users: If you tweet about this poster or any others in the series, please use the hashtag #mscloudarch. Thanks!

Windows 10 won’t let you share WiFi passwords any more

The content below is taken from the original (Windows 10 won’t let you share WiFi passwords any more), to continue reading please visit the site. Remember to respect the Author & Copyright.

Remember Microsoft’s WiFi Sense? One of its cornerstones is the ability to share password-protected WiFi networks with contacts, saving them the hassle of logging in when they visit. Unfortunately, though, there weren’t many people enamored with the idea. Microsoft has pulled WiFi Sense’s contact sharing in its latest Windows 10 Insider preview build after noting that it wasn’t worth the effort given "low usage and low demand." It’ll remain intact on slower Insider builds and regular Windows 10 releases for now, but it should disappear for everyone when the Anniversary Update hits in the summer.

This doesn’t mean that all of WiFi Sense is going away. It’ll still automatically connect you to public hotspots based on crowdsourced data, so you’re safe if you primarily use the feature to get online at airports and coffee shops. Even so, it’s hard to avoid that bittersweet feeling: while it’s good to see Microsoft pruning features people don’t use, the decision makes Windows 10 a little more inconvenient.

Via: The Verge

Source: Windows Experience Blog

Azure for Developers: Download Free eBook from O’Reilly

The content below is taken from the original (Azure for Developers: Download Free eBook from O’Reilly), to continue reading please visit the site. Remember to respect the Author & Copyright.

Some of you are sure to find this free eBook, Azure for Developers, to be of immense interest, as it covers all the opportunities that Microsoft Azure has to offer you as a developer.

Microsoft’s Azure platform offers a lot of functions and features (cloud hosting, web hosting, data analytics, data storage, machine learning, and more), all of which have been integrated with Visual Studio, the tool that .NET developers already know.



With such a large number of offerings, it can be daunting to know where to start. In this O’Reilly report, .NET developer John Adams breaks down the options in plain language so that you can quickly get up to speed on Microsoft Azure.

So if you want to know what Azure offers for your next project, or if you need to convince your management to go with Azure, you definitely want to download this free eBook as it has the information you require in a nutshell.

Click here to visit its download page. You will be asked to submit your email address and some other information, after which you will receive the download link via email. You may also receive O’Reilly Media’s regular weekly newsletters.

If you are looking for more free eBooks, downloads or other freebies, go visit this link and see if anything interests you.


