🎉🤖 PRODUCT LAUNCH NEWS 🎉🤖 Launch of Skynet Homescreen! 🚀DeFi needs decentralized front-ends and we’re proud to launch the FIRST and ONLY dashboard for managing decentralized apps in one place in-browser.

The content below is taken from the original ( 🎉🤖 PRODUCT LAUNCH NEWS 🎉🤖 Launch of Skynet Homescreen! 🚀DeFi needs decentralized front-ends and we’re proud to launch the FIRST and ONLY dashboard for managing decentralized apps in one place in-browser.), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/SkynetLabs to r/siacoin

London Show set to go ahead as an in-person event

The content below is taken from the original ( London Show set to go ahead as an in-person event), to continue reading please visit the site. Remember to respect the Author & Copyright.

With last year’s London Show becoming a virtual event because of the pandemic and associated restrictions, the RISC OS User Group of London (ROUGOL) has announced that this year’s event… Read more »

Cisco announces new partner platform functions at roundtable event

The content below is taken from the original ( Cisco announces new partner platform functions at roundtable event), to continue reading please visit the site. Remember to respect the Author & Copyright.


Cisco held a conference call for press and analysts last week to discuss changes and developments in the vendor’s Partner Experience Platform (PXP) and its PX and CX Clouds, while also highlighting some key areas where partners are seeing growth.

On the call were SVP of global partner sales Oliver Tuszik, VP of partner performance Jose van Dijk and SVP of customer and partner experience engineering Tony Colon, who each shared their insights into the partner landscape at Cisco and the company’s key areas of focus and improvement moving forward.

That includes new functionalities in the PXP such as AI-infused sales and planning tools for partners and the planned full launch of PX Cloud – a platform which provides partners with information and insights about their customers.

Here are the three main areas Cisco’s channel bosses chose to focus on during the press and analyst roundtable event…

 

Partner influence growing

Sharing updates on how Cisco’s partners were performing, Tuszik said the company’s partners are “leading the shift to software” and are continuing to grow their own recurring revenue in the process.

“The indirect partner share keeps growing. Where people say it’s going more direct, more digital, we are seeing fact-based bookings where our indirect sales via our partners are getting an even bigger share,” he said.

Tuszik pointed to the rise in partners selling more adoption services and increasing renewal rates, citing research from the company which found those figures had risen by 33 per cent and 40 per cent year-on-year respectively.

He also claimed distribution has contributed more than 50 per cent to Cisco’s growth since FY2014 – admitting this figure had “surprised” him.

“Routes to market are the most important expansion we need to drive right now,” Tuszik added.

“Customers want to reduce complexity. They no longer want to build or manage something on their own, they want somebody to deliver it as a service.”

PXP developments

Cisco launched PXP a year ago as part of its drive to create a simplified partner programme which Tuszik said at the time would “serve partners in a more flexible and agile way than ever before”.

Providing an update on the progress of PXP, Van Dijk said Cisco is aiming to retire “50 per cent” of the 180 tools partners used prior to PXP’s launch, with the current figure standing at “around 32 per cent”.

She announced several developments to PXP – the first of which is the introduction of a feature called Clair (customer lens for actionable insights and recommendations), which uses “ML and AI developed in house by Cisco” and is “based on 10 years of customer data and customer buying habits”.

Van Dijk claimed the tool would help partners to target the best opportunities by “segments, renewal, enterprise categories and sales plays” as just a few examples, and announced that it will become available for all partners in the second half of this year.

Also being introduced is an Integrated Partner Plan (IPP) which Van Dijk described as “a globally consistent plan” which would create “better alignment between the Cisco partner teams as well as the partner executives”.

“We are going to make sure that there is ongoing performance rankings or smart goals and KPIs that leverage all of the data that we have in PXP,” she added.

Thirdly, Cisco has introduced new capabilities in its sales opportunities segment – formerly known as renewals – which “lets partners see what the top line opportunities are, across all the different sales margins, as well as the performance on self-service metrics”.

“That of course, immediately impacts rebates and goes straight into the bottom line of the partners. We’ve added booking benchmarking information so partners can understand how they’re performing, relative to the peer group,” Van Dijk explained.

“And then, of course, we’re adding a new programme and new metric benchmarking information to provide a better context for prioritisation and decision making for our partners as well.”

And finally, Cisco has expanded PXP’s collaboration features through Partner Connect – which matches partners together “to help uncover and develop new buying centres and create valuable connections”.

PX Cloud and CX Cloud

Cisco also provided a demo for its PX Cloud, which is currently operating under limited availability but is set to become available for all partners “around the time” of Partner Summit.

“The partner experience platform is a full-fledged house, and there’s multiple tenants within that house,” Colon explained.

“And so, what we have in one of those tenants, or one of those floors, is something that we call the partner experience cloud. This is where customers get access to the telemetry data at its core.”

The PX Cloud provides partners with access to key information about their Cisco customers, which Colon claimed would provide “the full feedback on what is relevant to them”.

Key features of the PX Cloud include a single dashboard which “tracks offer engagement, customer portfolios and progress” and provides an “enhanced contract view to enable partners to identify expiring contracts and asset details”.

Cisco also claims the PX Cloud will “enable partners to quickly create targeted offers for specific customers” as well as “receive customer interests and feedback details”.

Meanwhile, the customer experience (CX) Cloud is the “close relative” of the PX Cloud and allows the customers of partners to view “telemetry data, assets, contracts and licences”.

To the cloud and beyond! Migration Enablement with Google Cloud’s Professional Services Organization

The content below is taken from the original ( To the cloud and beyond! Migration Enablement with Google Cloud’s Professional Services Organization), to continue reading please visit the site. Remember to respect the Author & Copyright.

Google Cloud’s Professional Services Organization (PSO) engages with customers to ensure effective and efficient operations in the cloud, from the time they begin considering how cloud can help them overcome their operational, business or technical challenges, to the time they’re looking to optimize their cloud workloads. 

We know that all parts of the cloud journey are important and can be complex.  In this blog post, we want to focus specifically on the migration process and how PSO engages in a myriad of activities to ensure a successful migration.

As a team of trusted technical advisors, PSO will approach migrations in three phases:

  1. Pre-Migration Planning
  2. Cutover Activities
  3. Post-Migration Operations

While this post will not cover in detail all of the steps required for a migration, it will focus on how PSO engages in specific activities to meet customer objectives, manage risk, and deliver value.  We will discuss the assets, processes and tools that we leverage to ensure success.

Pre-Migration Planning

Assess Scope

Before the migration happens, you will need to understand and clarify the future state that you’re working towards.  From a logistical perspective, PSO will be helping you with capacity planning to ensure sufficient resources are available for your envisioned future state.

While migration into the cloud does allow you to eliminate many of the considerations for the physical, logistical, and financial concerns of traditional data centers and co-locations, it does not remove the need for active management of quotas, preparation for large migrations, and forecasting.  PSO will help you forecast your needs in advance and work with the capacity team to adjust quotas, manage resources, and ensure availability. 

Once the future state has been determined, PSO will also work with the product teams to determine any gaps in functionality.  PSO captures feature requests across Google Cloud services and makes sure they are understood, logged, tracked, and prioritized appropriately with the relevant product teams.  From there, they work closely with the customer to determine any interim workarounds that can be leveraged while waiting for the feature to land, as well as providing updates on the upcoming roadmap.

Develop Migration Approach and Tooling

Within Google Cloud, we have a library of assets and tools we use to assist in the migration process.  We have seen these assets help us successfully complete migrations for other customers efficiently and effectively.

Based on the scoping requirements and tooling available to assist in the migration, PSO will help recommend a migration approach.  We understand that enterprises have specific needs; differing levels of complexity and scale; and regulatory, operational, or organizational challenges that will need to be factored into the migration.  PSO will help customers think through the different migration options and how all of the considerations will play out.

PSO will work with the customer team to determine the best migration approach for moving servers from on-prem to Google Cloud. PSO will walk customers through different migration approaches, such as refactoring, lift-and-shift, or new installs. From there, the customer can determine the best fit for their migration. PSO will provide guidance on best practices and share examples from other customers with similar use cases.

Google offers a variety of cloud-native tools that can assist with asset discovery, the migration itself, and post-migration optimization. PSO, as one example, will work with project managers to determine the tooling that best accommodates the customer’s requirements for migrating servers. PSO will also engage the Google product teams to ensure the customer fully understands the capabilities of each tool and the best fit for the use case. Google understands that, from a tooling perspective, one size does not fit all, so PSO will work with the customer to determine the best migration approach and tooling for each set of requirements.

Cutover Activities

Once all of the planning activities have been completed, PSO will assist in making sure the cutover is successful.

During and leading up to critical customer events, PSO can provide proactive event management services which deliver increased support and readiness for key workloads.  Beyond having a solid architecture and infrastructure on the platform, support for this infrastructure is essential and TAMs will help ensure that there are additional resources to support and unblock the customer where challenges arise.

As part of event management activities, PSO liaises with the Google Cloud Support Organization to ensure quick remediation and high resilience for situations where challenges arise.  A war room is usually created to facilitate quick communication about the critical activities and roadblocks that arise.  These war rooms can give customers a direct line to the support and engineering teams that will triage and resolve their issues.

Post-Migration Activities

Once cutover is complete, PSO will continue to provide support in areas such as incident management, capacity planning, continuous operational support, and optimization to ensure the customer is successful from start to finish.

PSO will serve as the liaison between the customer and Google engineers. If support cases need to be escalated, PSO will ensure the appropriate parties are involved and work to get the case resolved in a timely manner. Through operational rigor, PSO will work with the customer to determine whether certain Google Cloud services will be beneficial to the customer’s objectives. If services will add value, PSO will help enable them so they align with the customer’s goals and current cloud architecture. Where there are gaps in the services, PSO will proactively work with the customer and Google engineering teams to close them by enabling additional functionality in the services.

PSO will continue to work with the engineering teams to regularly review the customer’s cloud architecture and provide recommendations that ensure an optimal, cost-efficient design that adheres to Google’s best-practice guidelines.

Aside from migrations, PSO is also responsible for providing ongoing Google Cloud training to customers. PSO will work with the customer to jointly develop a learning roadmap, ensuring the customer has the necessary skills to deliver successful projects in Google Cloud.

Conclusion

Google PSO will be actively engaged throughout the customer’s cloud journey to ensure the necessary guidance, methodology, and tools are presented to the customer. PSO engages in a series of activities from pre-migration planning to post-migration operations, in key areas ranging from capacity planning, to ensure sufficient resources are allocated for future workloads, to support on technical cases and troubleshooting. PSO serves as a long-term trusted advisor, acting as the voice of the customer and helping to ensure the reliability and stability of the customer’s Google Cloud environment.

Click here if you’d like to engage with our PSO team on your migration. Or, you can also get started with a free discovery and assessment of your current IT landscape.



General availability: Azure Files supports storage capacity reservations for premium, hot, and cool tiers

The content below is taken from the original ( General availability: Azure Files supports storage capacity reservations for premium, hot, and cool tiers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Storage capacity reservations for Azure Files enable you to significantly reduce the total cost of ownership of storage by pre-committing to storage utilization. To achieve the lowest costs in Azure, you should consider reserving capacity for all production workloads.

When a cloud provider retires a service you’re using

The content below is taken from the original ( When a cloud provider retires a service you’re using), to continue reading please visit the site. Remember to respect the Author & Copyright.

Over the years I’ve had a few friends and clients reach out to me, unhappy about some public cloud services being removed, either from a name-brand cloud provider or secondary players. At times, entire clouds were being shut down.

The cloud providers typically give plenty of notice (sometimes years), calling the service “legacy” or “classic” for a time. They’ll have a migration tool and procedures to move to other similar services, sometimes to competitors. In some cases, they will pay for consultants to do it for you.

As a tech CTO for many years, I also had to sunset parts or all of technologies we sold. This meant removing support and eventually making the technology no longer viable for the customer. Again, this was done with plenty of notice, providing migration tools and even funding to make the move to more modern and likely better solutions.


Access free training and learn how to automate hyperparameter tuning to find the best model

The content below is taken from the original ( Access free training and learn how to automate hyperparameter tuning to find the best model), to continue reading please visit the site. Remember to respect the Author & Copyright.

In today’s post, we’ll walk through how to easily create optimal machine learning models with BigQuery ML’s recently launched automated hyperparameter tuning. You can also register for our free training on August 19 to gain more experience with hyperparameter tuning and get your questions answered by Google experts. Can’t attend the training live? You can watch it on-demand after August 19.  

Without this feature, users have to manually tune hyperparameters by running multiple training jobs and comparing the results. These efforts may not even pay off without knowing which candidate values are good ones to try.

With a single extra line of SQL code, users can tune a model and have BigQuery ML automatically find the optimal hyperparameters. This enables data scientists to spend less time manually iterating hyperparameters and more time focusing on unlocking insights from data. This hyperparameter tuning feature is made possible in BigQuery ML by using Vertex Vizier behind the scenes.  Vizier was created by Google Research and is commonly used for hyperparameter tuning at Google.

BigQuery ML hyperparameter tuning helps data practitioners by:

  • Optimizing model performance with one extra line of code to automatically tune hyperparameters, as well as customizing the search space
  • Reducing manual time spent trying out different hyperparameters
  • Leveraging transfer learning from past hyperparameter-tuned models to improve convergence of new models

How do you create a model using Hyperparameter Tuning?

You can follow along in the code below by first bringing the relevant data to your BigQuery project. We’ll be using the first 100K rows of data from New York taxi trips that is part of the BigQuery public datasets to predict the tip amount based on various features, as shown in the schema below:

[Screenshot: schema of the taxi trips table]

First create a dataset, bqml_tutorial in the United States (US) multiregional location, then run:
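The SQL from the original post isn’t reproduced in this digest, but a minimal sketch of that preparation step might look like the following (the public table and the handful of columns selected here are assumptions on my part, so adjust them to the schema you actually want to use):

```sql
-- Copy roughly 100K rows of the public NYC taxi data into your own dataset.
CREATE OR REPLACE TABLE `bqml_tutorial.taxi_trips` AS
SELECT
  trip_distance,
  fare_amount,
  passenger_count,
  payment_type,
  tip_amount            -- the label we want to predict
FROM
  `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2018`
WHERE
  tip_amount IS NOT NULL
LIMIT 100000;
```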

Without hyperparameter tuning, the model below uses the default hyperparameters, which may well not be ideal. The responsibility falls on data scientists to train multiple models with different hyperparameters and compare evaluation metrics across all the models. This can be a time-consuming process, and it can become difficult to manage all the models. In the example below, you can train a linear regression model, using the default hyperparameters, to try to predict tip amounts.
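A hedged sketch of such a baseline model (the model name is illustrative, and it assumes the taxi_trips table sketched above):

```sql
-- Baseline: linear regression with the default hyperparameters.
CREATE OR REPLACE MODEL `bqml_tutorial.taxi_tip_model`
OPTIONS (
  model_type = 'LINEAR_REG',
  input_label_cols = ['tip_amount']
) AS
SELECT * FROM `bqml_tutorial.taxi_trips`;
```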

With hyperparameter tuning (triggered by specifying NUM_TRIALS), BigQuery ML will automatically try to optimize the relevant hyperparameters across a user-specified number of trials (NUM_TRIALS). The hyperparameters that it will try to tune can be found in this helpful chart.
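A sketch of the same statement with tuning switched on; num_trials is the extra line that enables it, and max_parallel_trials (discussed a little further down) controls how many trials run at once. Model and table names are again illustrative:

```sql
-- Same model, but let BigQuery ML search for good l1_reg / l2_reg values.
CREATE OR REPLACE MODEL `bqml_tutorial.taxi_tip_model_hp`
OPTIONS (
  model_type = 'LINEAR_REG',
  input_label_cols = ['tip_amount'],
  num_trials = 20,           -- enables hyperparameter tuning
  max_parallel_trials = 2    -- run two trials at a time
) AS
SELECT * FROM `bqml_tutorial.taxi_trips`;
```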

In the example above, with NUM_TRIALS=20, starting with the default hyperparameters, BigQuery ML will try to train model after model while intelligently using different hyperparameter values — in this case, l1_reg and l2_reg as described here. Before training begins, the dataset will be split into three parts: training, evaluation, and test. The trial hyperparameter suggestions are calculated based upon the evaluation data metrics. At the end of each trial training, the test set is used to evaluate the trial and record its metrics in the model. Using an unseen test set ensures the objectivity of the test metric reported at the end of tuning.

The dataset is split three ways by default when hyperparameter tuning is enabled. The user can choose to split the data in other ways as described in the documentation here.

We also set max_parallel_trials=2 in order to accelerate the tuning process. With 2 parallel trials running at any time, the whole tuning should take roughly as long as 10 serial training jobs instead of 20.

Inspecting the trials 

How do you inspect the exact hyperparameters used at each trial? You can use ML.TRIAL_INFO to inspect each of the trials when training a model with hyperparameter tuning.
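For example, assuming the hypothetical model name used in the sketches above:

```sql
-- One row per trial, including the hyperparameter values and whether the trial is optimal.
SELECT *
FROM ML.TRIAL_INFO(MODEL `bqml_tutorial.taxi_tip_model_hp`)
ORDER BY trial_id;
```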

Tip: You can use ML.TRIAL_INFO even while your models are still training.

[Screenshot: ML.TRIAL_INFO results showing one row per trial]

In the screenshot above, ML.TRIAL_INFO shows one trial per row, with the exact hyperparameter values used in each trial. The results of the query above indicate that the 14th trial is the optimal trial, as indicated by the is_optimal column. Trial 14 is optimal here because the hparam_tuning_evaluation_metrics.r2_score — which is R2 score for the evaluation set — is the highest. The R2 score improved impressively from 0.448 to 0.593 with hyperparameter tuning!

Note that this model’s hyperparameters were tuned just by using num_trials and max_parallel_trials, and BigQuery ML searches through the default hyperparameters and default search spaces as described in the documentation here. When default hyperparameter search spaces are used to train the model, the first trial (TRIAL_ID=1) will always use default values for each of the default hyperparameters for the model type LINEAR_REG. This is to help ensure that the overall performance of the model is no worse than a non-hyperparameter tuned model.

Evaluating your model

How well does each trial perform on the test set? You can use ML.EVALUATE, which returns a row for every trial along with the corresponding evaluation metrics for that model.
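Again assuming the hypothetical model name from the earlier sketches:

```sql
-- For a hyperparameter-tuned model, ML.EVALUATE returns one row per trial.
SELECT *
FROM ML.EVALUATE(MODEL `bqml_tutorial.taxi_tip_model_hp`);
```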

[Screenshot: ML.EVALUATE results for each trial]

In the screenshot above, the columns “R squared” and “R squared (Eval)” correspond to the evaluation metrics for the test and evaluation set, respectively. For more details, see the data split documentation here.

Making predictions with your hyperparameter-tuned model

How does BigQuery ML select which trial to use to make predictions? ML.PREDICT will use the optimal trial by default and also returns which trial_id was used to make the prediction. You can also specify which trial to use by following the instructions.
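A minimal prediction query against the hypothetical model might look like the sketch below; by default the optimal trial is used, and the documentation describes passing a trial_id struct as a third argument if you want to pin a specific trial:

```sql
-- Output includes predicted_tip_amount plus the trial_id that produced it.
SELECT *
FROM ML.PREDICT(
  MODEL `bqml_tutorial.taxi_tip_model_hp`,
  (SELECT * FROM `bqml_tutorial.taxi_trips` LIMIT 10)
);
```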


Customizing the search space

There may be times where you want to select certain hyperparameters to optimize or change the default search space per hyperparameter. To find the default range for each hyperparameter, you can explore the Hyperparameters and Objectives section of the documentation.

[Screenshot: hyperparameters and objectives table from the documentation]

For LINEAR_REG, you can see the feasible range  for each hyperparameter. Using the documentation as reference, you can create your own customized CREATE MODEL statement:
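A hedged example of what such a customized statement could look like, using hparam_range for a continuous range and hparam_candidates for a discrete list (the bounds and candidate values below are arbitrary illustrations, not recommendations):

```sql
CREATE OR REPLACE MODEL `bqml_tutorial.taxi_tip_model_custom`
OPTIONS (
  model_type = 'LINEAR_REG',
  input_label_cols = ['tip_amount'],
  num_trials = 20,
  l1_reg = hparam_range(0, 20),                  -- search a continuous range
  l2_reg = hparam_candidates([0, 0.1, 1, 10])    -- search a fixed set of values
) AS
SELECT * FROM `bqml_tutorial.taxi_trips`;
```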

Transfer learning from previous runs

If this isn’t enough, hyperparameter tuning in BigQuery ML, with Vertex Vizier running behind the scenes, means you also get the added benefit of transfer learning between models that you train, as described here.

How many trials do I need to tune a model?

The rule of thumb is at least 10 * the number of hyperparameters, as described here (assuming no parallel trials). For example, LINEAR_REG will tune 2 hyperparameters by default, and so we recommend using NUM_TRIALS=20.

Pricing

The cost of hyperparameter tuning training is the sum of all executed trials costs, which means that if you train a model with 20 trials, the billing would be equal to the total cost across all 20 trials. The pricing of each trial is consistent with the existing BigQuery ML pricing model.

Note: Please be aware that the costs are likely going to be much higher than training one model at a time.

Exporting hyperparameter-tuned models out of BigQuery ML

If you’re looking to use your hyperparameter-tuned model outside of BigQuery, you can export your model to Google Cloud Storage, which you can then use to, for example, host in a Vertex AI Endpoint for online predictions.
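A sketch of that export step, assuming a Cloud Storage bucket you control (the bucket path is a placeholder, and it’s worth checking the documentation for how exports interact with multiple trials):

```sql
-- Export the tuned model to Cloud Storage for use outside BigQuery ML.
EXPORT MODEL `bqml_tutorial.taxi_tip_model_hp`
OPTIONS (URI = 'gs://your-bucket/models/taxi_tip_model_hp/');
```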

Summary

With automated hyperparameter tuning in BigQuery ML, it’s as simple as adding one extra line of code (NUM_TRIALS) to easily improve model performance! Ready to get more experience with hyperparameter tuning or have questions you’d like to ask? Sign up here for our no-cost August 19 training.


ReactOS Is Going Places, With More Stable AMD64, SMP, And Multi-Monitor Support

The content below is taken from the original ( ReactOS Is Going Places, With More Stable AMD64, SMP, And Multi-Monitor Support), to continue reading please visit the site. Remember to respect the Author & Copyright.

In the crowd of GNU/Linux and BSD users that throng our community, it’s easy to forget that those two families are not the only games in the open-source operating system town. One we’ve casually kept an eye on for years is ReactOS, the long-running open-source Windows-compatible operating system that is doing its best to reach a stable Windows XP-like experience. Their most recent update has a few significant advances mentioned in it that hold the promise of it moving from curiosity to contender, so is definitely worth a second look.

ReactOS has had 64-bit builds for a long time now, but it appears they’ve made some strides in both making them a lot more stable, and moving away from the MSVC compiler to GCC. Sadly this doesn’t seem to mean that this now does the job of a 64-bit Windows API, but it should at least take advantage internally of the 64-bit processors. In addition they have updated their support for the Intel APIC that is paving the way for ongoing work on multiprocessor support where their previous APIC driver couldn’t escape the single processor constraint of an original Intel 8259.

Aside from these, its new-found support for multiple monitors should delight more productive users, and its improved support for ISA plug-and-play cards will be of interest to retro enthusiasts.

We took a close look at the current ReactOS release when it came out last year, and concluded that its niche lay in becoming a supported and secure replacement for the many legacy Windows XP machines that are still hanging on years after that OS faded away. We look forward to these and other enhancements in their next release, which can’t be far away.

Zero trust with reverse proxy

The content below is taken from the original ( Zero trust with reverse proxy), to continue reading please visit the site. Remember to respect the Author & Copyright.

A reverse proxy stands in front of your data, services, or virtual machines, catching requests from anywhere in the world and carefully checking each one to see if it is allowed.

In order to decide (yes or no), the proxy will look at who and what.

Who are you (the individual making the request)? What is your role? Do you have access permission (authorization)?

What device are you using to make the request? How healthy is your device right now? Where are you located? 

At what time are you making the request?

This issue of GCP Comics presents an example of accessing some rather confidential data from an airplane, and uses that airplane as a metaphor to explain what the proxy is doing.

[Comic: Zero trust with reverse proxy]

Reverse proxies work as part of the load balancing step when requests are made to web apps or services, and they can be thought of as another element of the network infrastructure that helps route requests to the right place. No one can access your resources unless they meet certain rules and conditions.

If a request is invalid or doesn’t meet the necessary criteria set by your administrators, either because it is from an unauthorized person or an unsafe device, then the proxy will deny the request.

Why might the proxy say no to my request? When assessing the user making the request, denial of access could be due to reasons such as:

  • I’m in Engineering, but I am trying to access Finance data.
  • I’m not even a part of the company.
  • My job changed, and I lost access.

Looking at the device originating the request, the proxy could deny access due to a number of factors, such as:

  • Device operating system out of date
  • Malware detected
  • Device is not reporting in
  • Disk encryption missing
  • Device doesn’t have screen lock

Leveraging identity and device information to secure access to your organization’s resources improves your security posture.

Resources

To learn more about proxies and Zero Trust, check out the following resources:

Want more GCP Comics? Visit gcpcomics.com and follow us on Medium at pvergadia and max-saltonstall, and on Twitter at @pvergadia and @maxsaltonstall. Be sure not to miss the next issue!


Some Components for PowerShell Universal

The content below is taken from the original ( Some Components for PowerShell Universal), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hi,
I have made some components for PowerShell Universal that mostly do things on remote Windows clients. They’re perfect if you want to add them to your support page or similar for IT support.

I’m working on making my AD stuff universal as well, and ofc adding some more things to this repo.

Anyway, I just wanted to share. If you want to use it, feel free to do so, and if you want to make some PRs etc. you can do that too 🙂

Here is the repo.

https://github.com/rstolpe/PSU-Random

submitted by /u/rstolpe to r/PowerShell

tf-free: A project to create free resources on all cloud-providers

The content below is taken from the original ( tf-free: A project to create free resources on all cloud-providers), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hey r/DevOps,

I’ve created a project where you can create all the free resources available from the major cloud providers in a single command. I used it as a way to learn infrastructure as code without relying on companies or external training, and without spending a fortune either. I hope it helps you, as it did for me.

You are welcome to contribute and ask questions, it is not finished by any means, so be aware you must learn the basics before messing with its configurations.

submitted by /u/tsyklon_ to r/devops

Custom Instrument Cluster for Aging Car

The content below is taken from the original ( Custom Instrument Cluster for Aging Car), to continue reading please visit the site. Remember to respect the Author & Copyright.

All of the technological improvements to vehicles over the past few decades have led to cars and trucks that would seem borderline magical to anyone driving something like a Ford Pinto in the 1970s. Not only are cars much safer due to things like crumple zones, anti-lock brakes, air bags, and compulsory seat belt use, but there’s a wide array of sensors, user interfaces, and computers that also improve the driving experience. At least, until it starts wearing out. The electronic technology in our modern cars can be tricky to replace, but [Aravind] at least was able to replace part of the instrument cluster on his aging (yet still modern) Skoda and improve upon it in the process.

These cars have a recurring problem with the central part of the cluster that includes an LCD display. If replacement parts can even be found, they tend to cost a significant fraction of the value of the car, making them uneconomical for most. [Aravind] found that a 3.5″ color LCD that was already available fit perfectly in the space once the old screen was removed, so from there the next steps were to interface it to the car. These have a CAN bus separated from the main control CAN bus, and the port was easily accessible, so an Arduino with an RTC was obtained to handle the heavy lifting of interfacing with it.

Now, [Aravind] has a new LCD screen in the console that’s fully programmable and potentially longer-lasting than the factory LCD was. There’s full documentation of the process on the project page as well, for anyone else with a Volkswagen-adjacent car from this era. Either way, it’s a much more economical approach to replacing the module than shelling out the enormous cost of OEM replacement parts. Of course, CAN bus hacks like these are often gateway projects to doing more involved CAN bus projects like turning an entire vehicle into a video game controller.

Microsoft’s Cloud PCs debut – priced between $20 and $158 a month

The content below is taken from the original ( Microsoft’s Cloud PCs debut – priced between $20 and $158 a month), to continue reading please visit the site. Remember to respect the Author & Copyright.

We tried ’em on Windows, iOS and Android, and can’t say they’re very exciting

First Look: Microsoft has revealed the full range of options and pricing for its Windows 365 Cloud PCs, and The Register is not impressed – on price or performance…

‘It’s the right thing to do, which is why it’s free’ – new ‘Carbon’ platform aims to give sustainability leg-up to 1,000 UK resellers

The content below is taken from the original ( ‘It’s the right thing to do, which is why it’s free’ – new ‘Carbon’ platform aims to give sustainability leg-up to 1,000 UK resellers), to continue reading please visit the site. Remember to respect the Author & Copyright.


A consultancy run by a gang of former HP execs is aiming to give the mass UK channel a leg-up on sustainability through its new platform.

Headed up by a quartet of HP alumni including former UK channel boss Trevor Evans, Consenna today unveiled ‘Carbon’, a free-to-use platform featuring a menu of self-serve, sustainability-focused marketing campaigns, training and education.

Evans joined Consenna as MD in 2019 after a spell at Apple, in the process reuniting with former HP colleagues Douglas Jeffrey and Paul Thompson, the former of whom founded Consenna in 2009. Another HP alumnus, Simon Yates, has since joined as product management director.

Talking to CRN, Evans said that Carbon is designed to appeal to smaller resellers who lack the in-house clout to respond to rocketing customer demand for sustainable IT solutions. He cited research suggesting that at least 60 per cent of customers are now willing to pay more for a sustainable product or service.

“What we’re trying to do with Carbon is make it possible for every partner, in short order, to have products, services, campaigns, collateral, training – all the things that ordinarily a customer would ask for – at their fingertips in an unbiased way,” he said.

“It will appeal to companies who know their customers are asking for [sustainable IT solutions], and know their competition can offer it, but don’t know where to go in their company to provide it. We’d like to almost be that extended person in their office sat virtually at the click of a mouse. That’s our vision.”

Carbon will point reseller sales staff towards solutions in areas such as carbon offsetting and recycling, Evans indicated.

“It will have some go-to-places for a channel sales person to respond to a customer and say ‘I can provide this sustainability thread to the products I’m providing. I can tell you their carbon impact and give you a way to offset it, if that’s what you want to do’.

“We know there are some relatively good practices emerging in some areas of the channel to promote that, but we want to be able to offer it to the broader spectrum. Why should a particular reseller be at a disadvantage in their portfolio offerings just because they don’t know where to go?”

Consenna is aiming to enlist at least 1,000 resellers to Carbon by the end of the year. A distributor is likely to be recruited in the coming days, Evans added.

Positioning itself as a “provider of products and services to vendors that enable them to better leverage the channel”, Consenna’s headcount has risen from three to nearly 20 in the space of two years, with Evans brought on to help expand its repertoire.

It counts vendors such as HP, Lenovo, Microsoft and Fujitsu as its customers, but Carbon is both free and vendor-agnostic, Evans stressed.

“The reason Carbon evolved is because it’s the right thing to do, which is why it’s free. We feel strongly this is something our industry needs to address and we want to play a small part in that,” he said.

Hands On With the Raspberry Pi POE+ HAT

The content below is taken from the original ( Hands On With the Raspberry Pi POE+ HAT), to continue reading please visit the site. Remember to respect the Author & Copyright.

There’s a lot happening in the world of Pi. Just when we thought the Raspberry Pi Foundation were going to take a break, they announced a new PoE+ HAT (Hardware Attached on Top) for the Pi B3+ and Pi 4, and just as soon as preorders opened up I placed my order.

Now I know what you’re thinking, don’t we already have PoE HATs for the Pis that support it? Well yes, the Pi PoE HAT was released back in 2018, and while there were some problems with it, those issues got cleared up through a recall and minor redesign. Since then, we’ve all happily used those HATs to provide up to 2.5 amps at 5 volts to the Pi, with the caveat that the USB ports are limited to a combined 1.2 amps of current.

PoE vs PoE+
$20 for either of them. Choose wisely.

The Raspberry Pi 4 came along, and suddenly the board itself can pull over 7 watts at load. Combined with 6 watts of power for a hungry USB device or two, and we’ve exceeded the nominal 12.5 watt power budget. As a result, a handful of users that were trying to use the Pi 4 with PoE were hitting power issues when powering something like dual SSD drives over USB. The obvious solution is to make the PoE HAT provide more power, but the original HAT was already at the limit of what 802.3af PoE could provide, with a maximum power output of 12.95 watts.

The solution the Raspberry Pi Foundation came up with was to produce a new product, the PoE+ HAT, and sell it alongside the older HAT for the same $20. The common name for 802.3at is “PoE+”, which was designed specifically for higher power devices, maxing out at 30 watts. The PoE+ HAT is officially rated to output 20 watts of power, 5 volts at 4 amps. These are the output stats, so the efficiency numbers don’t count against your power budget, and neither does the built-in fan.

More Watts Than We Bargained For

The official specs don’t tell the full story, as evidenced by the initial announcement that claimed 5 amps instead of 4. That discrepancy bugged me enough that I reached out to the man himself, CEO [Eben Upton]. The head honcho confirmed:

The spec is that it will supply 20W, but it’s been designed to 25W to give us some engineering margin

So if you want to be super conservative, and ensure the longest possible life, keep your power draw at or under 20 watts. I tested the HAT to the point where it gave up, and not to let the cat out of the bag, 25 watts is still a bit conservative. More on that later.


We know there’s a lot of available power here, but it’s not exactly easy to get to. For instance, the Pi 4 can push up to 1.2 amps through the USB ports. At 5 volts, that’s only 6 watts of power; where’s the rest? In theory there’s a simple answer, as the HAT delivers power back through the 5v GPIO pins. All we need to do is jumper on to those pins and… Those pins don’t protrude through the HAT at all.

Really an amateur job, but it works!

I would have loved to see an official solution to make the GPIO pins accessible with the HAT on, and not an inelegant solution like using those hokey pin extenders that were recommended for the original PoE HAT. Are we foiled, then? Nope. You see, there’s a good 1/4 inch of GPIO pin visible between the Pi and the HAT. It’s just enough room for a good old fashioned wire-wrapped connection, along with some solder for safety.

OK, now we have access to more than 6 watts of power. There are two obvious questions: How much power, and what can we do with it? To kill two birds with one proverbial stone, I grabbed a string of RGB LEDs and wired the voltage supply directly into the 5v rail. The PoE+ HAT has a wonderful feature — it adds a sysfs node that tells you exactly how much current the HAT is providing. cat /sys/devices/platform/[email protected]/power_supply/rpi-poe/current_now

For testing the HAT, I invented a new unit of measure, the Cyberpunk Neon-purple Pixel. I used the PoE+ HAT to measure the power consumed by the Pi and Pixels, also recorded the power use reported by the PoE switch, and used a non-contact IR thermometer to find the hottest point on the HAT after a few minutes of powering the LED strip.

I repeated the experiment with the original PoE HAT, and you can review my raw results if you’d like. There are a couple minor caveats, mostly related to temperature measurement. My IR Thermometer doesn’t provide the rich data that a full IR camera does. Additionally, I was limited to measuring just one side of the PoE boards. I believe that the hottest spots on the original PoE HAT are on the underside of the board, while on the new HAT, seem to be on the side facing away from the Pi — that’s a win in itself. All that to say, my temperature measurements of the original HAT are probably quite a bit too low.

More Launch Problems?

So remember how the first iteration of the PoE HAT had some problems? The big one was that some USB devices could trip the over-current protection at much lower levels than they should have. There was the additional issue of the board getting ridiculously hot at full load. There have been reports of the PoE+ version having some similar launch warts. The problems that have been identified are: high temperature, high power draw from the HAT itself at idle, the 1.2 amp USB limit, a long bolt that contacts the camera connector, a louder fan, and odd behavior when powering the Pi and HAT over the USB C connector. I’ll step through these one at a time. These are legitimate concerns, and I’m not necessarily here to debunk them, but I will put them in context of my own testing. Edit: Shoutout to Jeff Geerling and Martin Rowan, linked above and below, for their early work on reviewing the PoE+ HAT.

First up is temperature. The PoE+ HAT measures nearly 52°C at idle, at its hottest measured point. That is quite warm, and is hotter than the 44.5°C I observed on the original PoE HAT under similar conditions. This seems to be in contention with what [Eben] had to say about temperatures:

Thanks to improved thermal design it should run cooler (measured at the hottest point on an uncased board) at any load.

I can think of one explanation that satisfies all the observations. The original HAT’s hottest point is between the HAT and the Pi itself. This is observable in the EEVBlog video linked above. I tested with the HATs installed on the Pis, making it essentially impossible to get a reading on the underside. Setting that explanation aside, my measurements indicated that the original HAT got very hot at higher power outputs, while the PoE+ HAT stayed quite stable. Above 7 watts of power output, the new HAT ran cooler as per my measurements.

The PoE+ HAT pulls 4.9+ watts of power to run an idling Raspberry Pi 4. The original HAT does the same thing for as little as 2.9 watts. At low power levels, the original HAT is definitely more efficient. The difference is that the original HAT runs at about 78% efficiency no matter how much power is being drawn, while the new PoE+ HAT can be as much as 88% efficient at higher power levels. The crossover point is somewhere between 1.5 and 2 amps of output. If power efficiency is of concern, you might want to stick with the original HAT for lower power use.

The USB ports on the Pis only supply 1.2 amps. This is annoying, but isn’t a weakness of the PoE HAT at all. We can hope for a future Pi revision that raises that limit. Until then, the workaround of tapping power directly from the 5v rail works nicely.

As for the long bolt, I’ll let [Eben]’s response speak for itself:

A number of people have found that the bolt touches, but does not damage, their camera connector. We’re likely to back it off to an 11mm bolt (10mm, as has been suggested in one or two places, is definitely too short) in a future production run.

The fan is louder at full speed, but quieter at its lowest speed. Additionally, it moves more air at full speed, 2.4 CFM compared to 2.2 from the older hardware. With a few tweaks to the fan’s trigger temperatures, the new fan can be quite a bit quieter overall. Just a note, if you have the PoE+ HAT, and the fan isn’t spinning at all, you probably need to pull the latest updates for the Raspberry Pi OS, as the enablement code has landed quite recently.

The final complaint is that the PoE+ HAT doesn’t properly block backfed power when it’s left on a Pi powered via the USB C plug. There is an annoying coil whine, and the HAT actually powers the high voltage side of its power supply circuit. This is obviously not ideal behavior. It would have been nice to have the backfeed protection, but the official documentation does address this: “When the Raspberry Pi PoE+ HAT is connected to your Raspberry Pi, your Raspberry Pi should only be powered through the ethernet cable. Do not use any additional method to power the Raspberry Pi.”

How Much Power

Cyberpunk Purple Pixels

Once I had my cyberpunk lighting rig set up, I thought it would be useful to find the hard limits and see how many pixels each HAT could power. The original HAT lit up 75 of them, but trying for 76 tripped the overcurrent protection. That indicates that 2.5 amps of output power is the threshold.

Now how many pixels can we turn cyberpunk purple with the PoE+ HAT? Once I hit 250 pixels, the resistance of the strip became a major factor, and increasing the driven pixels wasn’t really increasing the load. The last pixels were a noticeably different color as a result. To continue the experiment, I switched over to testing at pure white, AKA the individual red, green, and blue LEDs turned to 100% brightness. In this configuration, I was able to drive 140 pixels. The PoE+ HAT reported a maximum current of 5.4 amps, while my PoE switch showed that port pulling 30.6 watts of power, at a respectable 87.9% efficiency. The hard limit I finally hit was 5.5 amps at the HAT, at which point the Pi power cycled.

After a few minutes of driving the PoE+ HAT way beyond its rated power output, I measured 56.8°C at the hottest point I could find. That is an impressive, tough little board. I wouldn’t be comfortable running at those levels for long, or unattended, but it’s nice to know that it does work, and no magic smoke was released. Based on what Eben had to say about the device, 25 watts of power seems like the maximum power number to aim for. Given that the Pi itself will take at least 2.5 watts, essentially at idle, that leaves 22.5 watts of power you can potentially use for something clever. And all this with just an Ethernet cable running to the Pi. So the question, is what can you do with 22.5 watts? LED lighting is the idea that was obvious to me, but I’m confident the Hackaday community will continue to surprise me in what you can come up with, so let us know what you want to do with the PoE+ HAT.

Force all clients to only use the set DNS server of PFSense.

The content below is taken from the original ( Force all clients to only use the set DNS server of PFSense.), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/No-Introduction6905 to r/PFSENSE

10 Gigabit Ethernet for the Pi

The content below is taken from the original ( 10 Gigabit Ethernet for the Pi), to continue reading please visit the site. Remember to respect the Author & Copyright.

When people like Bell and Marconi invented telephones and radios, you have to wonder who they talked to for testing. After all, they had the first device. [Jeff] had a similar problem. He got a 10 gigabit network card working with the Raspberry Pi Compute Module. But he didn’t have any other fast devices to talk to. Simple, right? Just get a router and another network card. [Jeff] thought so too, but as you can see in the video below, it wasn’t quite that easy.

Granted, some — but not all — of the hold-ups were self-inflicted. For example, doing some metalwork to get some gear put in a 19-inch rack. However, some of the problems were unavoidable, such as the router that has 10 Gbps ports, but not enough throughput to actually move traffic at that speed. Recabling was also a big task.

A lot of the work revolved around side issues such as fan noises and adding StarLink to the network that didn’t really contribute to the speed, but we understand distractions in a project.

The router wasn’t the only piece of gear that can’t handle the whole 10 Gbps data stream. The Pi itself has a single 4 Gbps PCIe lane, so the most you could possibly get would be 4 Gbps, and testing showed that the real limit is not quite 3.6 Gbps. That’s still impressive, and the network card’s offloading helped the Pi’s performance as well.

On a side note, if you ever try to make videos yourself, watching the outtakes at the end of the video will probably make you feel better about your efforts. We all must have the same problems.

If you want to upgrade to 10Gb networking on the cheap, we have some advice. Just be careful not to scrimp on the cables.

Microsoft to apply Project Natick findings to ‘both land and sea datacentres’

The content below is taken from the original ( Microsoft to apply Project Natick findings to ‘both land and sea datacentres’), to continue reading please visit the site. Remember to respect the Author & Copyright.


A year after retrieving an experimental underwater datacentre from the Scottish seabed, Microsoft says it will look to apply its findings to both land- and sea-based datacentre builds. 

Microsoft dropped the 40-foot container off the coast of Orkney in 2018 to help inform its datacentre sustainability strategy. 

Talking to CRN, Spencer Fowers, principal member of technical staff at Microsoft Research, said the software giant is still in the process of analysing the data from the 864 servers that sat inside the capsule, which was pulled from the sea last July.  

The servers in the Natick datacentre were found to have one-eighth the failure rate of those on land, he explained. 

“And as we’ve begun to examine that, we’ve found that some of the biggest contributors to that increase in reliability have been the nitrogen environment, and then also just the hands-off style – there’s nobody inside to jostle the components or bump and disconnect things,” Fowers said. 

“That’s really improved our reliability.  

“We’re taking those findings and looking at ways we can apply them to improving land-based and underwater datacentres in the future.” 

Race to zero

With datacentres reportedly on course to generate two per cent of global CO2 by 2025, datacentre sustainability is a hot topic for channel partners looking to offer their customers the most sustainable compute options possible. 

All eyes are on the hyperscalers to up their game, with Google recently giving customers visibility over the CO2 emissions of its datacentre regions. 

Microsoft kicked off Project Natick back in 2014 with a 90-day deployment off the coast of California to determine if underwater datacentres were viable. 

The world’s oceans offer “free access” to cooling – which is one of the biggest costs for land-based datacentres, Fowers pointed out.

“You also get the benefit of proximity to customers – over 50 per cent of the world’s population lives within 200km of the sea,” he said. 

The Orkney deployment represented phase two of the project, Fowers explained. 

“The goal of phase two was to determine whether we could build a manufacturable underwater datacentre in a 90-day decision to power-on timeframe,” he said.

The project feeds into Microsoft’s goal of becoming carbon negative by 2030.

“We’ve made these big announcements around sustainability, and project Natick is a great example of how we are trying to find practical solutions,” he said. 

Planning for Windows 11 Deployment? This guide will help you get started

The content below is taken from the original ( Planning for Windows 11 Deployment? This guide will help you get started), to continue reading please visit the site. Remember to respect the Author & Copyright.

There are many things that are new with Windows 11, and businesses at some point will be looking to deploy it. But many of these things haven’t been placed at the forefront of the information dump by Microsoft. We understand that the software giant wants to focus on the most eye-catching features, but we […]

This article Planning for Windows 11 Deployment? This guide will help you get started first appeared on TheWindowsClub.com.

Lenovo launches device as-a-service in the UK

The content below is taken from the original ( Lenovo launches device as-a-service in the UK), to continue reading please visit the site. Remember to respect the Author & Copyright.


Vendor launches three-tier DaaS model which spans entire device portfolio

Lenovo has launched its device as-a-service (DaaS) offering in the UK spanning its entire device portfolio.

Partners can now sell Lenovo’s entire device portfolio, spanning laptops, desktops, workstations and tablets, through a monthly as-a-service model.

All authorised Lenovo partners now have access to the DaaS offering, claims Jane Ashworth, director of SMB and channel for the UK and Ireland.

Lenovo’s DaaS offering is split into three tiers: Simplify, Accelerate and Transform, accessible through the Lenovo Partner Hub or through Lenovo.com.

Its Simplify tier is intended as a “starting point” and is targeted at small businesses. Partners are able to use Lenovo’s online tool to add devices and services on behalf of the customer and calculate the total monthly cost of the service.

The Accelerate tier enables partners to add in their own services to the quote, which could include configuration or consulting services.

Ashworth described the upper “Transform” tier as a fully customisable end-to-end solution and fully managed service which would include deployment services, advanced IT automation and intelligent services.

“It means that for every size of customer, for every partner, and for every complex requirement from an end user, we have a version that will fit that request for the partner,” said Ashworth.

The channel boss said that the launch builds on Lenovo’s pre-existing DaaS solution which included only the “Transform” tier of the offering and was targeted at larger corporate customers and partners.

Lenovo piloted the DaaS offering in Q4 last year with a handful of UK partners: CDW, Computacenter, Softcat, Bechtle, XMA, Ultima and Centerprise.

Ashworth said the offering was tweaked based on feedback from partners.

“We tested it with the people on the sales floor because we needed to ensure that it fitted their needs and it was easy to use. We did a lot of user testing from a process point of view and adapted the tool itself,” she said.

“We also got a lot of feedback around the Accelerate tier because our partners wanted to add in some of their own tailored services, that they’ve developed themselves and that’s their USP. So that’s where the Accelerate tier came from to give partners the flexibility they required.”

She added: “We’ve tweaked the model based on their feedback. And now we’re in full roadmap mode. So really, really excited about it, it couldn’t have come at a more important time for the market.”

The DaaS offering comes after a stellar financial year for Lenovo, with group revenues for the year ending 31 March 2021 exceeding $60bn for the first time – 20 per cent higher than the previous year.

Lenovo’s PCSD (PC and smart devices) business in the UK enjoyed record revenues, surging by 46 per cent in Q4 to $12.4bn, Ashworth said, while its pre-tax income was up by 58 per cent year on year.

Its rivals have all launched new as-a-service options this year. Dell launched its Apex offering during this year’s Dell Technologies World in May, while HPE added storage to its own GreenLake as-a-service offering.

Cisco meanwhile launched its Cisco Plus offering in April, which made its IT infrastructure, networking, security, compute, storage and applications products available through an as-a-service model.

Best Project Management apps for Microsoft Teams

The content below is taken from the original ( Best Project Management apps for Microsoft Teams), to continue reading please visit the site. Remember to respect the Author & Copyright.

Microsoft Teams has become an indispensable work-from-home tool for many companies. If you want to spruce up the experience while managing numerous projects, you should try out these project management apps for Microsoft Teams. All these apps are freely available, and you can install them whenever you need. Before getting started with the list, you should […]

This article Best Project Management apps for Microsoft Teams first appeared on TheWindowsClub.com.

FOURtress reviewed as a RISC OS machine

The content below is taken from the original ( FOURtress reviewed as a RISC OS machine), to continue reading please visit the site. Remember to respect the Author & Copyright.

RISCOSbits has been establishing a strong reputation for producing stylish cases, with generally silly names, for Raspberry Pi boards running RISC OS. I already have a PiHard at work, but I do have a small space at home. So I decided to check out the new FOURtress. So what is it like as a RISC OS machine?

We have already had a quick look at the FOURtress in a previous article. The FOURtress is an overclocked Pi 4 in a very compact case (which still has room for an SSD inside). It boots straight into RISC OS and comes with a nicely customised desktop on top of the RISC OS Developments 5.28 release.

If you are using the Linux software for a dual boot system, there is a Files partition already set up to share files. You will see it if you boot into Linux. There is a !linux application for booting into Linux (which we will cover in more detail in another article).

There is a lot of additional software installed on the machine on top of the RISC OS Direct release. On the system there are free versions of !Organizer, !Fireworkz, emulators, tools, etc. There is also a directory called Free Links with links to lots of sites with software which you can download. RISCOSbits have been rummaging around the internet and collecting software so you do not need to.

There is also some RISCOSbits-specific software on the system, including a fan control application for the built-in fan. I have it set on automatic but have not managed to push RISC OS to the point where the fan was needed. So the noise level of the machine ranges from silent to a very quiet hum (if I am using Linux).

In use the machine feels very fast. I find it as quick as any other RISC OS machine I have (including my Titanium) and it runs Iris just as well. I am actually running RISC OS off the card, not the drive (which would be even quicker), so that I can have the much more disc-hungry Linux on the drive.

Lastly, the machine comes with a handy A5 start guide which answers all your questions on setting up and maintaining the machine. I have also found RISCOSbits super-responsive to any questions I ask.

If you want a larger machine which will take some Pi plugins (like the HAT for WiFi), there are better choices around. If you are looking for a very compact, polished and fast RISC OS machine with lots of software, the FOURtress should definitely be on your list. It can also run Linux, and we will be looking at that option next time.

FOURtress website


[How To] Fitbit integration

The content below is taken from the original ( [How To] Fitbit integration), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hi, I’ve written a small article that explains how to integrate Tasker with Fitbit; you can find it here.

I wanted to enable/disable some alarms on my Fitbit devices without having to do it manually.

I hope it helps 🙂

submitted by /u/pirasalbe to r/tasker

What is PowerShell splatting and how does it work?

The content below is taken from the original ( What is PowerShell splatting and how does it work?), to continue reading please visit the site. Remember to respect the Author & Copyright.

Hi all,

I recently posted an in-depth overview of the many uses of PowerShell splatting. Hopefully there is something useful in it for everyone. So, check it out at the link below and let me know what you think in the comments!

https://ryanjan.uk/2021/05/13/powershell-splatting/

Cheers!

submitted by /u/ryan-jan to r/PowerShell

Complete PC inside an old amplifier, with fully functional front

The content below is taken from the original ( Complete PC inside an old amplifier, with fully functional front), to continue reading please visit the site. Remember to respect the Author & Copyright.

submitted by /u/sabbathian to r/sffpc