Feeds

Matthew Garrett: Trying to remove the need to trust cloud providers

Planet Debian - Tue, 2022-12-13 16:19
First up: what I'm covering here is probably not relevant for most people. That's ok! Different situations have different threat models, and if what I'm talking about here doesn't feel like something you have to worry about, that's great! Your life is easier as a result. But I have worked in situations where we had to care about some of the scenarios I'm going to describe, and the technologies I'm going to talk about here solve a bunch of these problems.

So. You run a typical VM in the cloud. Who has access to that VM? Well, firstly, anyone who has the ability to log into the host machine with administrative capabilities. With enough effort, perhaps also anyone who has physical access to the host machine. But the hypervisor also has the ability to inspect what's running inside a VM, so anyone with the ability to install a backdoor into the hypervisor could theoretically target you. And who's to say the cloud platform launched the correct image in the first place? The control plane could have introduced a backdoor into your image and run that instead. Or the JavaScript running in the web UI that you used to configure the instance could have selected a different image without telling you. Anyone with the ability to get a (cleverly obfuscated) backdoor introduced into quite a lot of code could achieve that. Obviously you'd hope that everyone working for a cloud provider is honest, and you'd also hope that their security policies are good and that all code is well reviewed before being committed. But when you have several thousand people working on various components of a cloud platform, there's always the potential for something to slip up.

Let's imagine a large enterprise with a whole bunch of laptops used by developers. If someone has the ability to push a new package to one of those laptops, they're in a good position to obtain credentials belonging to the user of that laptop. That means anyone with that ability effectively has the ability to obtain arbitrary other privileges - they just need to target someone with the privilege they want. You can largely mitigate this by ensuring that the group of people able to do this is as small as possible, and put technical barriers in place to prevent them from pushing new packages unilaterally.

Now imagine this in the cloud scenario. Anyone able to interfere with the control plane (either directly or by getting code accepted that alters its behaviour) is in a position to obtain credentials belonging to anyone running in that cloud. That's probably a much larger set of people than have the ability to push stuff to laptops, but they have much the same level of power. You'll obviously have a whole bunch of processes and policies and oversights to make it difficult for a compromised user to do such a thing, but if you're a high enough profile target it's a plausible scenario.

How can we avoid this? The easiest way is to take the people able to interfere with the control plane out of the loop. The hypervisor knows what it booted, and if there's a mechanism for the VM to pass that information to a user in a trusted way, you'll be able to detect the control plane handing over the wrong image. This can be achieved using trusted boot. The hypervisor-provided firmware performs a "measurement" (basically a cryptographic hash of some data) of what it's booting, storing that information in a virtualised TPM. This TPM can later provide a signed copy of the measurements on demand. A remote system can look at these measurements and determine whether the system is trustworthy - if a modified image had been provided, the measurements would be different. As long as the hypervisor is trustworthy, it doesn't matter whether or not the control plane is - you can detect whether you were given the correct OS image, and you can build your trust on top of that.
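The measurement chain described above can be sketched as a toy model. This is a deliberate simplification (real TPMs keep measurements in PCR registers with specific hash banks and event logs), but it shows why a substituted image is detectable:

```python
import hashlib

def extend(pcr: bytes, data: bytes) -> bytes:
    """Model a TPM PCR extend: new = SHA-256(old PCR value || SHA-256(data))."""
    measurement = hashlib.sha256(data).digest()
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start out zeroed; each boot component is measured before it runs.
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel", b"initrd"]:
    pcr = extend(pcr, component)

# A remote verifier recomputes the chain from known-good component hashes;
# any substituted component yields a different final value.
print(pcr.hex())
```

Because each extend hashes the previous value in, an attacker cannot reorder or replace a component and still arrive at the expected final measurement.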

(Of course, this depends on you being able to verify the key used to sign those measurements. On real hardware the TPM has a certificate that chains back to the manufacturer and uniquely identifies the TPM. On cloud platforms you typically have to retrieve the public key via the metadata channel, which means you're trusting the control plane to give you information about the hypervisor in order to verify what the control plane gave to the hypervisor. This is suboptimal, even though realistically the number of moving parts in that part of the control plane is much smaller than the number involved in provisioning the instance in the first place, so an attacker managing to compromise both is less realistic. Still, AWS doesn't even give you that, which does make it all rather more complicated)

Ok, so we can (largely) decouple our trust in the VM from having to trust the control plane. But we're still relying on the hypervisor to provide those attestations. What if the hypervisor isn't trustworthy? This sounds somewhat ridiculous (if you can't run a trusted app on top of an untrusted OS, how can you run a trusted OS on top of an untrusted hypervisor?), but AMD actually have a solution for that. SEV ("Secure Encrypted Virtualisation") is a technology where (handwavily) an encryption key is generated when a new VM is created, and the memory belonging to that VM is encrypted with that key. The hypervisor has no access to that encryption key, and any access to memory initiated by the hypervisor will only see the encrypted content. This means that nobody with the ability to tamper with the hypervisor can see what's going on inside the OS (and also means that nobody with physical access can either, so that's another threat dealt with).

But how do we know that the hypervisor set this up, and how do we know that the correct image was booted? SEV has support for a "Launch attestation", a CPU generated signed statement that it booted the current VM with SEV enabled. But it goes further than that! The attestation includes a measurement of what was booted, which means we don't need to trust the hypervisor at all - the CPU itself will tell us what image we were given. Perfect.
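The verification flow looks roughly like this. Note this is a stand-in sketch: real SEV attestation uses an ECDSA signature from a key rooted in AMD's certificate chain, not the HMAC used here for brevity:

```python
import hashlib
import hmac

# Stand-in for the CPU's attestation key; in reality this is an asymmetric
# key whose certificate chains back to AMD.
CPU_KEY = b"per-cpu-secret"

def launch_attestation(image: bytes) -> tuple[bytes, bytes]:
    """The CPU measures the launched image and signs the measurement."""
    measurement = hashlib.sha256(image).digest()
    signature = hmac.new(CPU_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes, expected_image: bytes) -> bool:
    """A remote party checks the signature, then compares measurements."""
    good_sig = hmac.new(CPU_KEY, measurement, hashlib.sha256).digest()
    if not hmac.compare_digest(good_sig, signature):
        return False  # the attestation didn't really come from the CPU
    return measurement == hashlib.sha256(expected_image).digest()
```

The key point is that the hypervisor never holds the signing key, so it cannot forge an attestation for an image it didn't actually boot.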

Except, well. There are a few problems. AWS just doesn't have any VMs that implement SEV yet (there are bare metal instances that do, but obviously you're building your own infrastructure to make that work). Google only seem to provide the launch measurement via the logging service - and they only include the parsed-out data, not the original measurement. So we still have to trust (a subset of) the control plane. Azure provides it via a separate attestation service, but again it doesn't seem to provide the raw attestation, so you're still trusting the attestation service. For the newest generation of SEV, SEV-SNP, this is less of a big deal because the guest can provide its own attestation. But Google doesn't offer SEV-SNP hardware yet, the driver you need for this only shipped in Linux 5.19, and Azure's SEV Ubuntu images only offer up to 5.15, so making use of it at the moment means putting your own image together.

And there's one other kind of major problem. A normal VM image provides a bootloader and a kernel and a filesystem. That bootloader needs to run on something. That "something" is typically hypervisor-provided "firmware" - for instance, OVMF. This probably has some level of cloud vendor patching, and they probably don't ship the source for it. You're just having to trust that the firmware is trustworthy, and we're talking about trying to avoid placing trust in the cloud provider. Azure has a private beta allowing users to upload images that include their own firmware, meaning that all the code you trust (outside the CPU itself) can be provided by the user, and once that's GA it ought to be possible to boot Azure VMs without having to trust any Microsoft-provided code.

Well, mostly. As AMD admit, SEV isn't guaranteed to be resistant to certain microarchitectural attacks. This is still much more restrictive than the status quo where the hypervisor could just read arbitrary content out of the VM whenever it wanted to, but it's still not ideal. Which, to be fair, is where we are with CPUs in general.

(Thanks to Leonard Cohnen who gave me a bunch of excellent pointers on this stuff while I was digging through it yesterday)

Categories: FLOSS Project Planets

Nonprofit Drupal posts: December Drupal for Nonprofits Holiday Hour

Planet Drupal - Tue, 2022-12-13 16:05

Join us Thursday, December 15 at 1pm ET / 10am PT, for a special edition of our monthly call. (Convert to your local time zone.)

Our usual informal get-together will be even more so, as we gather to celebrate the season with friends old and new.  We may end up talking shop -- since that's what happens when you get any two Drupalists in a room together -- but no specific topics are on the agenda this month.  (That said, if you've got something specific on your mind, feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!)

All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.

This free call is sponsored by NTEN.org and open to everyone. 

  • Join the call: https://us02web.zoom.us/j/81817469653

    • Meeting ID: 818 1746 9653
      Passcode: 551681

    • One tap mobile:
      +16699006833,,81817469653# US (San Jose)
      +13462487799,,81817469653# US (Houston)

    • Dial by your location:
      +1 669 900 6833 US (San Jose)
      +1 346 248 7799 US (Houston)
      +1 253 215 8782 US (Tacoma)
      +1 929 205 6099 US (New York)
      +1 301 715 8592 US (Washington DC)
      +1 312 626 6799 US (Chicago)

    • Find your local number: https://us02web.zoom.us/u/kpV1o65N

  • Follow along on Google Docs: https://nten.org/drupal/notes

View notes of previous months' calls.

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #555 (Dec. 13, 2022)

Planet Python - Tue, 2022-12-13 14:30

#555 – DECEMBER 13, 2022
View in Browser »

Package Python Code With pyproject.toml & Listing Files With pathlib

How do you start packaging your code with pyproject.toml? Would you like to join a conversation that gently walks you through setting up your Python projects to share? This week on the show, Christopher Trudeau is here, bringing another batch of PyCoder’s Weekly articles and projects.
REAL PYTHON podcast

Who Controls Parallelism? A Disagreement Causing Slowdowns

In complex systems there may be a fight between the parallelism in your code vs the parallelism in the libraries you’re using. This fight can cause things to slow down. This article shows some examples and what you can do about it.
ITAMAR TURNER-TRAURING

Time Series Forecasting Methods

Time series data runs almost every technology. With this massive amount of data, developers can now better infer what has happened to their data in the past and attempt to predict future values. Read the article to get an overview of time series forecasting methods, and how to validate models →
INFLUXDATA sponsor

Make a Mastodon Bot on AWS Free Tier

This article walks you through everything you need to know to get a Mastodon bot set up on the AWS Free Tier using DynamoDB and AWS Lambda.
MAT DUGGAN

PyPy v7.3.10 Release

PYPY.ORG

Django Bugfix Release: 4.1.4

DJANGO SOFTWARE FOUNDATION

Python 3.11.1, 3.10.9, 3.9.16, 3.8.16, 3.7.16 Released

CPYTHON DEV BLOG

XtremePython 2022 Online Conference December 27th

XTREMEPYTHON.DEV • Shared by Haim Michael

Discussions

Software Architects: What’s Your Typical Day Look Like?

HACKER NEWS

What Style of import Statement Do You Use?

TWITTER.COM/BBELDERBOS • Shared by Bob Belderbos

Python Jobs

Software Engineer - Weissman Lab (Cambridge, MA, USA)

Whitehead Institute for Biomedical Research

More Python Jobs >>>

Articles & Tutorials

I/O Is No Longer the Bottleneck

A common interview question Ben asks candidates is to write a program that counts the frequency of words in a file; as a follow-up, he asks where the bottleneck in the code is. The most common answer, I/O, is not necessarily true on modern hardware. Read on to see the comparisons between Python and Go and where the program actually spends its time.
BEN HOYT
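A minimal version of that interview exercise looks something like this (a sketch for illustration, not Ben's actual benchmark code):

```python
import collections

def word_frequencies(path: str) -> collections.Counter:
    """Count how often each whitespace-separated word appears in a file,
    case-insensitively."""
    counts: collections.Counter = collections.Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:  # stream line by line rather than slurping the file
            counts.update(line.lower().split())
    return counts
```

On a modern SSD, profiling a program like this typically shows the time going to lowercasing, splitting, and hash-table updates rather than to reading the file, which is the article's point.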

Django Settings Patterns to Avoid

The settings module is key to getting your Django project up and running, storing the info your project needs to run. As with all code, there are both good and bad habits. This article details some of the patterns you should avoid.
ADAM JOHNSON

Find Your Next Tech Job Through Hired

Hired has 1000s of companies, from startups to Fortune 500s, who are hiring developers, data scientists, mobile engineers, and more. Create a profile with your skills and preferences for hiring managers to reach you directly. Sign up today!
HIRED sponsor

Simplicity Is an Advantage but Sadly Complexity Sells Better

This opinion piece from Eugene Yan discusses why complexity is often touted over simplicity: the effort is more obvious and therefore must be superior. This is a trap in thinking. Eugene makes the tougher argument for simplicity.
EUGENE YAN

Python Basics: Dictionaries

One of the most useful data structures in Python is the dictionary. In this video course, you’ll learn what a dictionary is, how dictionaries differ from lists and tuples, and how to define and use dictionaries in your own code.
REAL PYTHON course

Make Beautiful QR Codes in Python

QR codes don’t have to look ‘industrial’ and they’re trivially easy to create in Python. This article focuses on personal, social, and human applications for the trusty old QR code.
PETE FISON • Shared by Pete Fison

Getting Started With PyTorch 2.0

PyTorch has released a new Getting Started guide with all the info you need to begin your PyTorch 2.0 journey.
PYTORCH.ORG

Projects & Code

git-bug: Distributed, Offline-First Bug Tracker Embedded in Git

GITHUB.COM/MICHAELMURE

kangas: Explore Multimedia Datasets at Scale

GITHUB.COM/COMET-ML

whitebox: E2E ML Monitoring Platform

GITHUB.COM/SQUAREDEV-IO

takahe: An ActivityPub/Fediverse Server

GITHUB.COM/JOINTAKAHE

NansAreNumbers: An Esoteric Data Type Built on NaNs

GITHUB.COM/THOPPE

Events

Python North East

December 14, 2022
PYTHONNORTHEAST.COM

Weekly Real Python Office Hours Q&A (Virtual)

December 14, 2022
REALPYTHON.COM

PyData Bristol Meetup

December 15, 2022
MEETUP.COM

PyLadies Dublin

December 15, 2022
PYLADIES.COM

Python Pizza Holguín

December 17 to December 18, 2022
PYTHON.PIZZA

Happy Pythoning!
This was PyCoder’s Weekly Issue #555.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Amin Bandali: Why I love participating in LibrePlanet

GNU Planet! - Tue, 2022-12-13 13:30

Also published on the Free Software Foundation's community blog:
Amin Bandali: Why it's fun to participate in LibrePlanet

I'm Amin Bandali, a free/libre software activist by passion, and a software developer/engineer and computing scientist by profession. I am a former intern and current volunteer with the Free Software Foundation (FSF), and a member of the GNU Project. One of the ways I volunteer with the FSF is through LibrePlanet. I've helped with various aspects of the conference's organization, currently mainly helping as a member of the LibrePlanet committee, which reviews all session proposals. In this blog post I'd like to give a quick background on how and why I got involved with LibrePlanet and how I contribute to it today. I will also share how you, too, could start helping with the organization of the conference in a number of different ways, if you're interested!

I first got involved with LibrePlanet as a volunteer a few years back. By that point, I'd enjoyed participating in the conference via IRC and watching the talks online for a few years, and I was looking for ways to get involved. As I couldn't make it to Boston to attend LibrePlanet in person, I volunteered online, with tasks such as helping watch over the conference IRC channels and answering questions as best as I could. I seemed to have done a decent job, since the FSF folks later asked if I could do the same for a few non-LibrePlanet online FSF events too, which I gladly accepted.

Having enjoyed both participating and volunteering for LibrePlanet, I thought it would be great if I could give a talk of my own, too. This only became possible for me after 2020 with the possibility of doing remote presentations. Since I sadly cannot attend the event in person currently, this was a welcome side-effect of the conference temporarily switching to an online-only format. So, I submitted a proposal to talk about "Jami and how it empowers users" for LibrePlanet 2021, which was accepted and became my first LibrePlanet talk. Though presenting, or even just submitting a talk at a large conference like LibrePlanet, may sometimes seem like an intimidating task, I had a great time presenting mine, thanks in no small part to the FSF staff and other volunteer organizers, as well as the audience members.

The FSF staff were supportive and encouraging throughout the entire process of preparing and presenting my talk, and the audience gave positive and/or constructive feedback after my presentation. Plus, I greatly enjoyed discussing various free software topics with them, which was not really surprising because the folks attending LibrePlanet tend to be free software enthusiasts or activists like myself who are often just as eager to watch and chat with others about free software. And, as my good GNU friend Jason Self puts it, LibrePlanet is a wonderful place for such enthusiasts to "recharge their free software batteries each year".

Back in 2020, I was invited to join the LibrePlanet committee, a diverse team of volunteers from different backgrounds and areas of expertise that review all sessions submitted, helping select session proposals in a way that provides an exciting lineup of talks for people of differing areas and levels of experience and interest. I humbly and happily accepted the invitation to join the committee, and I help with the reviews to date. (I of course don't review my own session proposals, nor the ones I recognize to be from people I know). If you are also interested in joining the LibrePlanet committee and helping review the wonderful session proposals the team receives for each conference, you can come by the #libreplanet or #fsf channels on the Libera.Chat IRC network and reach out to the FSF staff there, or send an email to campaigns@fsf.org.

Besides being part of the LibrePlanet committee and helping review session proposals, there are a number of other ways to contribute to the organization of the conference as well. Technical tasks include helping with the setup and/or the maintenance of some pieces of infrastructure for the conference, for example helping maintain the conference's self-hosted installation of LibreAdventure, which is the conference's online event space where people can have their avatars "bump" into each other to have a real-time videoconferencing chat, and they can explore sessions, the FSF office (digitized), virtual sponsor booths, and more. Non-technical tasks include helping with the moderation of the conference's IRC channels on the event days, and volunteering to introduce, caption, or transcribe talks. There are also other logistical tasks that need doing now that LibrePlanet is switching to a hybrid format with both online and in-person events (in Boston). If you are interested in getting involved and helping with any of these (or other) tasks, please email to resources@fsf.org.

The theme for LibrePlanet 2023 is "Charting the Course", which I find particularly apt and important. The free software movement has come a long way and thanks to the tireless efforts of people from projects and communities of varying sizes, today we can carry out a very wide range of computing tasks in total freedom. It is also crucially important to continue recognizing and making progress in the areas of digital life where avoiding nonfree software may not be currently possible or feasible. One such notorious area is online payments, where the GNU Taler folks have been hard at work making freedom-respecting, privacy-friendly online transactions possible. At LibrePlanet 2023, I hope to see talks on such areas of digital life. I look forward to talks presenting the state of available free software in a given field, clarifying to what extent we can participate in it in freedom, along with a wishlist for improvements and a roadmap for moving closer towards freedom in that field so that we will ultimately, hopefully, reach full digital freedom.

These, along with other factors — such as the FSF staff striving for LibrePlanet to be inclusive and accessible, as well as making it possible to participate online for those of us not able to attend the event in person — make LibrePlanet a free software event I'm most excited about and look forward to each year. I hope and expect that LibrePlanet 2023 will be a conference with a lineup of interesting, fun, educational, and thought-provoking user freedom themed talks and sessions, along with a chance to catch up and socialize with fellow free software hackers, activists, and/or enthusiasts from all over the world, just like it always has been — especially this time with its ever more relevant theme of "Charting the Course" to not only reflect and celebrate the path we've come so far, but to also look towards the future and chart the course to software user freedom for coming generations.

Take care, and I hope to see you around for LibrePlanet 2023!

Amin Bandali
LibrePlanet Committee Member and assistant GNUisance

Categories: FLOSS Project Planets

FSF Blogs: Amin Bandali: Why it's fun to participate in LibrePlanet

GNU Planet! - Tue, 2022-12-13 13:09
LibrePlanet Committee Member and assistant GNUisance shares why it's fun and rewarding to participate in the annual LibrePlanet conference.
Categories: FLOSS Project Planets

Amin Bandali: Why it's fun to participate in LibrePlanet

FSF Blogs - Tue, 2022-12-13 13:09
LibrePlanet Committee Member and assistant GNUisance shares why it's fun and rewarding to participate in the annual LibrePlanet conference.
Categories: FLOSS Project Planets

Mike Herchel's Blog: Hark! The Online Drupal 10 Launch Party is Tomorrow! 🎉

Planet Drupal - Tue, 2022-12-13 09:54
mherchel, Tue, 12/13/2022 - 10:00
Categories: FLOSS Project Planets

Real Python: Context Managers and Python's with Statement

Planet Python - Tue, 2022-12-13 09:00

The with statement in Python is a quite useful tool for properly managing external resources in your programs. It allows you to take advantage of existing context managers to automatically handle the setup and teardown phases whenever you’re dealing with external resources or with operations that require those phases.

What’s a context manager? It’s an object that defines setup and teardown actions to run when you enter and exit a block of code. The context management protocol allows you to create your own context managers so you can customize the way you deal with system resources.

In this video course, you’ll learn:

  • How context managers work
  • Some common context managers in the Python standard library
  • How to write a custom context manager

With this knowledge, you’ll write more expressive code and avoid resource leaks in your programs.
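As a small illustration of the protocol (a sketch, not an example from the course itself), a context manager is any object with `__enter__` and `__exit__` methods:

```python
import time

class Timer:
    """Context manager that measures the wall-clock time of its with-block."""

    def __enter__(self):
        # Setup phase: record the start time and hand the manager back
        # so the caller can read .elapsed later.
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Teardown phase: runs even if the block raised an exception.
        self.elapsed = time.perf_counter() - self.start
        return False  # don't suppress exceptions raised in the block

with Timer() as t:
    sum(range(100_000))
print(f"took {t.elapsed:.6f}s")
```

The `with` statement guarantees `__exit__` runs no matter how the block is left, which is exactly the resource-safety property the course is about.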

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Linux App Summit 2023 will be held in Brno

Planet KDE - Tue, 2022-12-13 08:19

We’re happy to announce that Linux App Summit 2023 will take place in Brno, Czech Republic on April 21–23, 2023. For 2023, Linux App Summit (LAS) will again be held as a hybrid event, allowing attendees and speakers to join virtually or in person at our venue in Brno.

LAS is a conference focused on building a Linux application ecosystem. It aims to encourage the creation of quality applications, seek opportunities for compensation for FOSS developers, and foster a thriving market for the Linux operating system.

Everyone is invited to attend! Companies, journalists, and individuals who are interested in learning more about the Linux desktop application space and growing their user base are especially welcome. The call for papers and registration will open soon. Please check linuxappsummit.org for more updates in the upcoming weeks.

About Brno

Brno, the second-largest city in the Czech Republic, is a technological hub in Central Europe and the wider region. Several universities specializing in Information Technology give Brno a large source of IT talent and as a result, many companies have opened research and development facilities in the city. With around 90,000 students residing in the area, Brno is a vibrant university city and home to many museums, theatres, festivals, and cultural events. It is a member of the UNESCO Creative Cities Network and in 2017 was designated as a "City of Music". Alongside the urban areas, visitors will find traditional Moravian folklore preserved in some districts and can experience traditional Moravian costumes, wines, folk music, and dance.
View of Brno. SchiDD, CC BY-SA 4.0, via Wikimedia Commons.

There are lots of sights to see in Brno! Some of the most popular attractions are:

  • Špilberk Castle
  • Cathedral of St. Peter and Paul
  • Veveří Castle
  • Villa Tugendhat

We hope to see you in Brno!

About the Linux App Summit

The Linux App Summit is co-organized by GNOME and KDE. It brings the global Linux community together to learn, collaborate, and help grow the Linux application ecosystem. Through talks, panels, and Q&A sessions, we encourage attendees to share ideas, make connections, and join our goal of building a common app ecosystem. Previous iterations of the Linux App Summit have been held in the United States in Portland, Oregon, and Denver, Colorado, as well as in Barcelona, Spain, and Rovereto, Italy.
Attendees at LAS 2022.

Learn more by visiting linuxappsummit.org.

Categories: FLOSS Project Planets

C/C++ Profiling Tools

Planet KDE - Tue, 2022-12-13 05:00

This blog will give you a brief overview of profiling C and C++ applications. Additionally, it will lay before you all of the tools available, with the purpose of aiding you in choosing the right tools at the right times.

 

The Steps for Profiling

Before we look at the actual tools, let’s go over the steps to profiling. It’s quite important to have a technique for doing this properly, to avoid the trap of changing something hoping it’s better, committing it, and going home without making sure that you’ve actually improved things. So the way to do that is by, first, assessing what is important in terms of performance in your project. Is it the CPU usage? Is it the off CPU time, when your application is sleeping or waiting for something to happen? Is it memory allocations? Is it the battery usage that is the problem? Do you want to improve the frame rate? It can be many, many different things, not just one. It’s a whole set of measures.

  1. First, you’ll want to assess what is important for your project. What do you want to measure?
  2. Then, you’ll decide which tools you can use to do these measurements. There are many different tools and they all cover a number of possible things to measure.
  3. After you select your tools, you should write a benchmark. That means a reliable way of measuring something in your application. If you have to start the application, click here, click there, load a file, wait for something to be downloaded and so on and so on, you won’t be able to do so three times in a row and get the same measurements. It’s much better if you write some sort of automated test, like a unit test but for benchmarking instead of correctness, which means measuring the performance of a specific task. You decide what to measure, how to measure, and write a benchmark to actually be able to measure that reliably.
  4. Use one of the tools to run the benchmark and establish a baseline by measuring the benchmark, before you make any changes to the code.
  5. At step five, you can finally change the code. Keep in mind that you do not change the code until step 5. After you change it, measure the result, compare it to the baseline, and decide whether or not the change is an improvement. That can be a bit tricky if it’s better on one measure than it is on another. You need to decide if that’s good enough or if you should refine the solution.

Then you measure again with the changes, returning to step 4 to profile and step 5 to make further changes, and so on.
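The measure-change-compare loop above can be sketched in a few lines. This example uses Python for brevity (the article's tools target C and C++, but the methodology is identical), and the workload function is a hypothetical stand-in:

```python
import timeit

def run_benchmark(workload, repeat=5, number=200) -> float:
    """Run the workload several times and return the best (least noisy) time."""
    return min(timeit.repeat(workload, repeat=repeat, number=number))

def workload():
    # Stand-in for the specific task you decided to measure reliably.
    return sum(i * i for i in range(1000))

baseline = run_benchmark(workload)   # step 4: establish the baseline
# ...step 5: change the code under test, then measure again...
after = run_benchmark(workload)
improved = after < baseline          # compare before deciding to commit
```

Taking the minimum of several repeats is a common way to reduce scheduling noise; the important part is that the baseline is recorded before any code changes.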

 

The Tools to Use

In order to choose the best tools for the job, it’s important to be aware of all the tools that are available and when to use them.

For Measuring Performance

Let’s talk first about measurements of performance, specifically, CPU and off CPU performance.

VTune

To measure performance, you can use VTune, which is made by Intel. It’s a very powerful tool for this with a very nice user interface. VTune is available for both Linux and Windows and is free if you download it as part of Intel System Studio. Don’t look for it separately as VTune, but as part of the Intel System Studio suite of tools. It’s actually free for use — even for commercial use. VTune does, however, have one limitation — it requires Intel hardware, which means you can’t use it on AMD CPUs or ARM, if you have embedded boards. Apart from that one limitation, it’s a very good tool.

Perf

Another tool you can use to measure performance is perf, which is part of the Linux kernel. That means it supports all of the architectures of the Linux kernel, including x86, ARM, PPC, and so on. Unfortunately, perf has no user interface. It’s a command line tool that is pretty difficult to use. So, we at KDAB wrote a tool called Hotspot, which is a graphical interface for the measurements made by perf.

You can find Hotspot on GitHub. It’s an open source application that you can use for free. Its goal is to be easy to use and it covers most of the common use cases, including watching the CPU time used by the application and finding out who is using that time. It also supports measuring off CPU time, meaning the time when the application is sleeping or waiting for something to happen. Click here to watch a full demo of Hotspot.

For Measuring Memory Allocations

Another thing you might want to measure, as previously mentioned, is memory allocations — not just memory leaks but also the use of memory while the application is running.

Valgrind Massif

If your application cleans everything up on exit, heavy memory usage will not show up as a leak, so the leak-checking tools will not help. But if your application is using too much memory while it’s running, you might want to use different tools that can pinpoint where those allocations happen. One tool that does that is Valgrind Massif. It does the job quite well, but is really slow.

Heaptrack

Another approach is to use Heaptrack, an open source tool that’s part of KDE. It was developed by one of my colleagues, Milian Wolff. Heaptrack is able to track and record all of the memory allocations made while the application is running. Then it will show you graphs of the allocations, including temporary allocations, which is when an area of memory is allocated and then freed right away afterwards.

This may be intentional, of course, but it can also be something to optimize. Heaptrack can show you, for different memory sizes, whether you tend to allocate many small blocks or a few large ones. And of course it shows you a backtrace for every allocation, so you can relate the application’s memory usage back to your code and find out which piece of code is responsible. It is a lot faster than Valgrind Massif; the overhead while Heaptrack is running is barely noticeable.

Another feature that Heaptrack has over Valgrind is that you can attach it to a running program. This is quite useful if you want to measure only one operation, as opposed to the whole setup of the application. It can also show you the difference between two runs, which is extremely useful if you apply the process mentioned earlier: measure a baseline, measure again with your changes, and Heaptrack will show you only the difference between the two, so you can see by how much your changes improved things.
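The corresponding command lines, sketched with placeholder names (`myapp` and PID `1234` are stand-ins; the data file name varies per run) and guarded so they only execute where Heaptrack is installed:

```shell
# Profile from startup, or attach to an already-running process by PID,
# then open the recorded data file in the analyzer GUI.
HEAPTRACK_ATTACH_FLAG="--pid"
if command -v heaptrack >/dev/null 2>&1 && [ -x ./myapp ]; then
  heaptrack ./myapp                            # profile a fresh run
  heaptrack "$HEAPTRACK_ATTACH_FLAG" 1234      # or attach to a running process
  heaptrack --analyze heaptrack.myapp.*        # inspect the results
fi
```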

For a full demo of Heaptrack, watch this video.

 

Need More Details?

For more details about how to use all of these tools, check out the Profiling and Debugging videos on our YouTube channel. If you would like to learn even more about these tools than what's in the videos, we at KDAB also offer a three-day Profiling and Debugging C and C++ Applications training.

The post C/C++ Profiling Tools appeared first on KDAB.

Categories: FLOSS Project Planets

Python Bytes: #314 What are you, a wise guy? Sort it out!

Planet Python - Tue, 2022-12-13 03:00
Watch on YouTube: https://www.youtube.com/watch?v=P0KeSWtB4Sc

About the show

Sponsored by Microsoft for Startups Founders Hub.

Connect with the hosts

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org

Brian #1: FAQtory

  • Will McGugan
  • "FAQtory is a tool to auto-generate a FAQ.md (Frequently Asked Questions) document for your project."
  • FAQtory has a FAQ.md written by itself, so you can see an example of the project in the project.
  • Builds a markdown FAQ.md that includes questions at the top that link to answers below.
  • "Additionally, a 'suggest' feature uses fuzzy matching to reply to GitHub issues with suggestions from your FAQ."
  • I haven't tried this part, but I'm looking forward to it; it may help answer GitHub issues that are really questions.

Michael #2: Kagi search "live with it" report

  • Still enjoying it a lot
  • Very fast
  • Love blocking SEO-heavy, content-light sites
  • Maps are rough around the edges
  • Not obvious how to set it as a private/incognito search engine (but it can be done in settings)
  • They have browser extensions, but I don't want to install extensions (I only use 1Password and Zoom)
  • It could use some documentation, however (e.g. it supports !'s, but what are they?)
  • Being tempted by Orion too, but sticking with Vivaldi

Brian #3: Tools for rewriting Python code

  • Luke Plant
  • A collection of tools that change your code (hopefully for the better)
  • Several categories:
      - formatting and coding style: black, isort, …
      - upgrades: pyupgrade, flynt, … (we need one to convert from setup.py/setup.cfg to pyproject.toml)
      - type hints: auto type hints? Cool. Maybe. (I haven't tried any of these, but they look interesting)
      - refactoring: editors, rope, jedi
      - other: autoflake, shed, …
      - write your own, with LibCST

Michael #4: Socketify

  • Brings high-performance WebSocket, HTTP, and HTTPS servers to PyPy3 and Python 3
  • A new record for Python: no other web framework was able to reach 6.2 million requests per second before in @TFBenchmarks
  • This puts Python in the same ballpark as #golang, #Rust, and #C++

Extras

Brian:

  • Watching mousebender from Brett Cannon (by the way, watching releases is cool; probably a decent reason to use the GitHub releases feature)
  • The Python Developer's Guide has a visual of the Python versions and release cycle:
      - Shows the stages of releases: end-of-life, security, bugfix, feature
      - The next end-of-life is Python 3.7 on 27 June 2023
      - Great descriptions of what all these terms mean at the bottom

Michael:

  • Michael's latest post: Sometimes, You Should Build It Yourself
  • Trying all in on Proton for personal stuff
  • Bunny fonts
  • AI stand-up comedy

Joke: Wise guy, eh?
Categories: FLOSS Project Planets

Opensource.com: Drupal 10 is worth a fresh look

Planet Drupal - Tue, 2022-12-13 03:00
Drupal 10 is worth a fresh look Martin Anderso… Tue, 12/13/2022 - 03:00

Drupal 10 is chockful of useful features, a fresh look, and a brand-new editor.

The popular Drupal open source content management system (CMS) reaches a significant milestone when version 10 is released on December 14. Personally, I think Drupal X sounds…

Categories: FLOSS Project Planets

Evolving Web: Drupal 10 has Arrived. Here’s What to Expect.

Planet Drupal - Tue, 2022-12-13 02:54

The long-awaited Drupal 10 will be released on December 14. Are you ready? While some of us get excited about software updates, we get that some might be overwhelmed by what this new version will change for their sites. 

Here we’ll go over how you can be prepared for the latest version, the benefits of moving and the next steps to get there sooner rather than later.

And don’t worry: if you still need guidance by the end of this post, you can watch our recent Drupal 10 webinar, or request custom training for your team. 

Why Upgrade? Can’t I Just Keep Using an Earlier Version?

Drupal 7 and 9 will become obsolete as of November 2023, and having an outdated version of Drupal or any unsecured CMS means your website is more prone to downtimes and bugs. As an organization, implementing and maintaining security measures on your own can be very expensive. Coupled with the custom infrastructure configurations needed to fix bugs, it will invariably cost more than proactively moving to the latest version of Drupal.

Apart from Drupal 7 and 9 reaching end-of-life, Drupal 10 promises to provide an overall improved Drupal experience for content editors, developers and site owners. 

Drupal 10 makes it easy to create content through CKEditor 5, has a greater standardization for using Drupal as a headless CMS, and future features such as automatic updates will make your platform easy to maintain. Overall, this upgrade will provide a better platform for brands looking to create engaging digital experiences. 

Content Creation Front and Centre with CKEditor 5

D10 continues Drupal’s trend of prioritizing content creators and editors for better front-end development. Central to this trend is CKEditor 5, which was an experimental module introduced in Drupal 9.3 and is the sole WYSIWYG editor in Drupal 10.

This new version provides a significantly improved content authoring experience with in-place controls for object editing. It enables you to easily manage media and tables, and ships with a range of out-of-the-box core capabilities, from basic formatting and styling to advanced productivity features.

Shiny New Themes

Content creators will appreciate Claro, which replaces Seven as the new default admin theme. Meanwhile, Olivero, which replaces Bartik as the new default front-end theme, makes for a more usable front end, putting accessibility best practices at the forefront.

Claro is notably less cluttered with more “white space” right out of the box, making it easier on the eyes, less complex to learn and allows for greater accessibility overall. Olivero, first introduced in Drupal 9.4, is now the default front-end theme for Drupal 10. Featuring a simple and modern design, Olivero is focused on the needs of content creators. It is also WCAG AA compliant right from the start, with an accessible and beautiful interface that features a high-contrast colour palette that’s easier on the eyes.

New Dependencies

On the developer side, Drupal 10 has some noteworthy upgrades, including a new underlying technology stack – Symfony 6.2 – and a new version of PHP. 

PHP 8.1, in addition to being a requirement for Symfony 6.2, promises a longer support lifetime for Drupal 10 as well as more stability and predictability in its dependency requirements.

Additionally, Drupal 10 will use modern JavaScript components to replace some uses of jQuery UI and jQuery, as well as a new version of Twig (3.x), which promises to be a faster, more secure and more flexible PHP template engine. Plus, Drupal 10 will no longer support Internet Explorer 11.

Ready-to-Go Headless

Drupal has long been a dependable CMS for hybrid or fully headless configurations thanks to its support for REST, JSON:API, and GraphQL APIs (read more about Headless). Drupal's Decoupled Menus Initiative has sought to make headless design even easier by improving how JavaScript front ends consume configurable menus managed in Drupal.

Beyond Drupal 10.0, the platform’s headless capacity will be expanded further by adding read-only menus for Drupal HTTP APIs. This will make it easier for front-end developers to consume menu data to build navigation systems.

Site Builders Will Soon Have a Project Browser

Another exciting addition coming in upcoming versions of Drupal 10 is the Project Browser, which aims to take the mystery out of starting and building a new project in Drupal. This handy feature makes it easier for users, especially novice site builders, to hunt down the perfect modules for their projects. It will provide a visual browsing interface within the Drupal admin, with a more intuitive filtering system and iconography that conveys key quality measures faster.

Three Cheers for Automatic Updates!

Automatic updates have long been one of the most requested features for inclusion in Drupal. The Automatic Updates Initiative has been one of Drupal’s key strategic initiatives for quite some time, and while not yet available in Drupal 10, is expected in version 10.1 or 10.2. With Drupal releasing new features every 6 months or so, it’s only a matter of time before it’s included. 

The upcoming automatic update module will rely on the following three components:

  • Public safety alerts regarding critical and highly critical updates from Drupal.org
  • Readiness checks, which trigger warnings of issues blocking your website from receiving automatic updates
  • In-place updates, which download the update from Drupal.org, check it and create backups of the files, perform the update and restore your backup files if anything goes wrong

The Automatic Updates module is yet another feature aimed at making life easier for content creators, particularly those who don’t have a development background who are tasked with managing Drupal websites and who lack a routine for checking and running Drupal updates upon release.

Need Help With Upgrading?

If you are already using Drupal 9, the upgrade to 10 will be smooth, as the two share the same architecture. If you are still using an older version, you are looking at a migration of your site to Drupal 10, which involves replicating your applications from the old platform onto the new one.

As easy as it sounds to upgrade or adopt Drupal, sometimes you just need a little support. Follow along with Drupal experts as we do a deep dive and reveal some sneak peeks for Drupal 10 in our recent webinar or reach out to request custom training. We offer custom training for teams like yours, from sales and marketing to software development. 

Regardless of which version of Drupal you’re currently using, or even if you’re thinking of moving to Drupal from a different CMS, you can hire us to help. Fill out our contact form to schedule your move and let us do the heavy lifting.

+ more awesome articles by Evolving Web
Categories: FLOSS Project Planets


Specbee: 7 Drupal Security Strategies you need to implement right away! (Includes top Drupal 9 Security Modules)

Planet Drupal - Tue, 2022-12-13 02:45
7 Drupal Security Strategies you need to implement right away! (Includes top Drupal 9 Security Modules) Shefali Shetty 13 Dec, 2022

Website security is not a one-time goal but an ongoing process that needs constant attention. It is always better to prevent a disaster than to respond to one. With a Drupal website, you can be confident that the Drupal security team will resolve reported security issues.
 
Drupal has powered millions of websites, many of which handle extremely critical data. Unsurprisingly, Drupal has been the CMS of choice for websites that handle critical information like government websites, banking and financial institutions, e-Commerce stores, etc. Drupal security updates and features address all top 10 security risks of OWASP (Open Web Application Security Project). 
 
However, the onus is ultimately on you to ensure your website is secure by following security best practices and implementing continuously evolving security strategies. Read more to find out how.

Drupal Security Vulnerabilities

It goes without saying that the community takes security in Drupal very seriously and keeps releasing Drupal security updates and patches. The Drupal security team is proactive and often ready with patches before a vulnerability goes public. For example, the team released security advisory SA-CORE-2018-002 days before the vulnerability was actually exploited (Drupalgeddon 2). Patches and Drupal security updates were soon released, advising Drupal site admins to update their websites.
 
Quoting Dries from one of his blogs on the security vulnerability: "The Drupal Security Team follows a 'coordinated disclosure policy': issues remain private until there is a published fix. A public announcement is made when the threat has been addressed and a secure version of Drupal core is also available. Even when a bug fix is made available, the Drupal Security Team is very thoughtful with its communication."


Some interesting insights on Drupal's vulnerability statistics are available from CVE Details.

1. Keep Calm and Stay Updated – Drupal Security Updates    

The Drupal security team is always on its toes looking out for vulnerabilities, and patches and Drupal security updates are released as soon as one is found. Also, since Drupal 8 and the adoption of continuous innovation, minor releases are more frequent, which makes updating to a better, more secure version quick and easy.
Making sure your Drupal version and modules are up to date is really the least you can do to ensure the safety of your website. Drupal contributors stay on top of things and are always looking out for any security threats that could spell disaster. Drupal updates don't just come with new features; they also include security patches and bug fixes. Security announcements are emailed to users, and site admins have to keep their versions updated to stay secure.

2. Administer your inputs 

Interactive websites usually gather input from users. As a website admin, unless you handle these inputs appropriately, your website is at high risk. Hackers can inject SQL code that can cause great harm to your website's data.
Stopping your users from entering SQL-specific words like "SELECT", "DROP" or "DELETE" would harm the user experience of your website. Instead, Drupal's database API provides escaping and filtering functions that strip out such harmful SQL injections. Sanitizing your code is the most crucial step toward a secure Drupal website.

3. Drupal 9 Security

How is Drupal 9 helping to build a more robust and secure website?
  • Symfony – By adopting the Symfony framework, Drupal 9 opened its doors to many more developers beyond just core Drupal developers. Not only is Symfony a more secure framework; it also brought in more developers with different insights to fix bugs and create security patches.
  • Twig Templates – We just discussed sanitizing your code to handle inputs better; with Drupal, this has already been taken care of, thanks to Drupal 9's adoption of Twig as its templating engine. With Twig, you do not need any additional filtering or escaping of inputs, as output is automatically sanitized. Additionally, Twig's enforcement of separate layers for logic and presentation makes it impossible to run SQL queries from, or otherwise misuse, the theme layer.
  • More Secure WYSIWYG - The WYSIWYG editor in Drupal is a great editing tool for users but it can also be misused to carry out attacks like XSS attacks. With Drupal 9 following Drupal security best practices, it now allows for using only filtered HTML formats. Also, to prevent users from misusing images and to prevent CSRF (cross-site request forgery), Drupal’s core text filtering allows users to use only local images.
  • The Configuration Management Initiative (CMI) – This Drupal initiative works out great for site administrators and owners as it allows them to track configuration in code. Any site configuration changes will be tracked and audited, allowing strict control over website configuration.
4. Choose your Drupal modules wisely

Before you install a module, look at how active it is. Are the module developers active enough? Do they release Drupal security updates often? Has it been downloaded before, or are you the first scapegoat? You will find all of these details at the bottom of the module's download page. Also ensure your modules are updated, and uninstall the ones you no longer use.

5. Drupal Security Modules to the rescue

Just like layered clothing works better than one thick pullover to keep warm during winter, your website is best protected in a layered approach. Drupal security modules can give your website an extra layer of security around it.

Automatic Updates

This is currently a contributed module but will soon move to core in Drupal 10. The goal of the automatic updates initiative is to provide an easy, safe and secure way to update a Drupal website automatically. It helps automatically update your site with core patches and security releases. Any issues during the update process are detected and reported so you don’t have to find out later.

Login Security

This module ensures security in Drupal by allowing the site administrator to add various restrictions on user login. The Drupal login security module can restrict the number of invalid login attempts before blocking accounts. Access can be denied for IP addresses either temporarily or permanently.

Two-factor Authentication

With this Drupal security module, you can add an extra layer of authentication once your user logs in with a user-id and password. Like entering a code that’s been sent to their mobile phone.

Password Policy

This is a great Drupal security module that lets you add another layer of security to your login forms, thus preventing bots and other security breaches. It enforces certain restrictions on user passwords – like constraints on the length, character type, case (uppercase/lowercase), punctuation, etc. It also forces users to change their passwords regularly (password expiration feature).

Username Enumeration Prevention

By default, Drupal's login error message reveals whether an entered username exists (when the other credentials are wrong). This is great for a hacker trying random usernames in order to find one that's actually valid. This module improves security in Drupal and prevents such attacks by changing the standard error message.

Content Access

As the name suggests, this module lets you give more detailed access control to your content. Each content type can be specified with custom view, edit or delete permissions. You can manage permissions for content types by role and author.

Security Kit

This Drupal security module offers many risk-handling features. Vulnerabilities like cross-site scripting (or sniffing), CSRF, Clickjacking, eavesdropping attacks, and more can be easily handled and mitigated with this Drupal 9 security module.

Captcha

As much as we hate to prove our human-ness, CAPTCHA is probably one of the best Drupal security modules out there to filter unwanted spambots. This Drupal module prevents automated script submissions from spambots and can be used in any web form of a Drupal website.

6. Check on your Permissions

Drupal allows you to have multiple roles and users like administrators, authenticated users, anonymous users, editors, etc. In order to fine-tune your website security, each of these roles should be permitted to perform only a certain type of work. For example, an anonymous user should be given the least permissions like viewing content only. Once you install Drupal and/or add more modules, do not forget to manually assign and grant access permissions to each role.

7. Get HTTPS

I bet you already knew that any traffic transmitted over plain HTTP can be snooped and recorded by almost anyone. Information like your login ID, password, and other session data can be grabbed and exploited by an attacker. If you have an e-commerce website, this gets even more critical, as it deals with payment and personal details. Installing an SSL certificate on your server secures the connection between the user and the server by encrypting the data that's transferred. An HTTPS website can also improve your SEO ranking, which makes it well worth the investment.

Final Thoughts

As the old adage goes - Expect the best but plan for the worst. By default, Drupal is a very secure content management framework, but you will still need to implement security strategies and follow Drupal security best practices for a good night’s sleep. Drupal 9 brings along a whole new bunch of security features for a more robust and secure website. Nonetheless, keeping your website up-to-date with Drupal security updates is indispensable. Writing clean and secure code plays a significant role in your website security. Choose an expert Drupal development agency that can provide you with effective security strategies and implementation services.


Author: Shefali Shetty

Meet Shefali Shetty, Director of Marketing at Specbee. An enthusiast for Drupal, she enjoys exploring and writing about the powerhouse. While not working or actively contributing back to the Drupal project, you can find her watching YouTube videos trying to learn to play the Ukulele :)

Drupal 9 Drupal 9 Module Drupal Planet Drupal Development

Categories: FLOSS Project Planets

PreviousNext: Decoupled OpenSearch: A Case Study

Planet Drupal - Mon, 2022-12-12 18:39

Watch the video to learn how our team leveraged a highly available AWS OpenSearch service fronted by React to build lightning-fast, dynamic search interfaces backed by Drupal using Search API.

by adam.bramley / 13 December 2022

At DrupalSouth 2022 in Brisbane, I presented our experiences with Decoupled OpenSearch.

In my talk, I covered:

  • our architecture, why it was chosen, and how you can set it up for yourself.

  • an overview of the Search API OpenSearch module.

  • a deep dive into the frontend technologies and methodologies we used to make building decoupled search apps a breeze.

I’ve also made the links from my talk available for you to investigate in your own time.


Dirk Eddelbuettel: digest 0.6.31 on CRAN: snprintf Update

Planet Debian - Mon, 2022-12-12 17:29

Release 0.6.31 of the digest package arrived at CRAN this weekend, and is being uploaded to Debian as well.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms), permitting easy comparison of R language objects. It is a mature and widely-used package: many tasks involve caching of objects, and digest provides convenient general-purpose hash key generation to quickly identify them.
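For readers who think in Python rather than R, the core idea (a stable, general-purpose hash key derived from an arbitrary object) can be sketched with the standard library. This is only an analogy, not how digest itself is implemented, and a pickle-based key only works for picklable objects:

```python
import hashlib
import pickle

def object_digest(obj) -> str:
    # Serialize the object deterministically, then hash the bytes.
    return hashlib.sha256(pickle.dumps(obj, protocol=4)).hexdigest()

a = {"x": [1, 2, 3]}
b = {"x": [1, 2, 3]}
print(object_digest(a) == object_digest(b))         # True: equal objects, equal keys
print(object_digest(a) == object_digest({"x": 1}))  # False: different objects
```

Such a key can then be used to decide whether a cached result for an object is still valid without comparing the objects themselves.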

This release contains a tiny change (in maybe a dozen places), triggered recently by macOS and now checked for by r-devel, which consists of replacing (v)sprintf calls with (v)snprintf. No more, no less.

My CRANberries provides the usual summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Glyph Lefkowitz: Potato Programming

Planet Python - Mon, 2022-12-12 16:40

One potato, two potato, three potato, four
Five potato, six potato, seven potato, more.

Traditional Children’s Counting Rhyme

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Knuth, Donald
“Structured Programming with go to statements”
Computing Surveys, Vol. 6, No. 4, December 1974
(p. 268)
(Emphasis mine)

Knuth’s admonition about premature optimization is such a cliché among software developers at this point that even the correction to include the full context of the quote is itself a cliché.

Still, it’s a cliché for a reason: the speed at which software can be written is in tension — if not necessarily in conflict — with the speed at which it executes. As Nelson Elhage has explained, software can be qualitatively worse when it is slow, but spending time optimizing an algorithm before getting any feedback from users or profiling the system as a whole can lead one down many blind alleys of wasted effort.

In that same essay, Nelson further elaborates that performant foundations simplify architecture[1]. He then follows up with several bits of architectural advice that are highly specific to parsing—compilers and type-checkers specifically—which, while good, is hard to generalize beyond “optimizing performance early can also be good”.

So, here I will endeavor to generalize that advice. How does one provide a performant architectural foundation without necessarily wasting a lot of time on early micro-optimization?

Enter The Potato

Many years before Nelson wrote his excellent aforementioned essay, my father coined a related term: “Potato Programming”.

In modern vernacular, a potato is very slow hardware, and “potato programming” is the software equivalent of the same.

The term comes from the rhyme that opened this essay, and is meant to evoke a slow, childlike counting of individual elements as an algorithm operates upon them. It names an unfortunately common software-architectural idiom whereby interfaces are provided in terms of scalar values: in other words, APIs that require you to use for loops or other forms of explicit, individual, non-parallelized iteration. But this is all very abstract; an example might help.

For a generic business-logic example, let’s consider the problem of monthly recurring billing. Every month, we pull in the list of all subscriptions to our service, and we bill them.

Since our hypothetical company has an account-management team that owns the UI which updates subscriptions and a billing backend team that writes code to interface with 3rd-party payment providers, we’ll create 2 backends, here represented by some Protocols.

Finally, we’ll have an orchestration layer that puts them together to actually run the billing. I will use async to indicate which things require a network round trip:

class SubscriptionService(Protocol):
    async def all_subscriptions(self) -> AsyncIterable[Subscription]:
        ...

class Subscription(Protocol):
    account_id: str
    to_charge_per_month: money

class BillingService(Protocol):
    async def bill_amount(self, account_id: str, amount: money) -> None:
        ...

To many readers, this may look like an entirely reasonable interface specification; indeed, it looks like a lot of real, public-facing “REST” APIs. An equally apparently-reasonable implementation of our orchestration between them might look like this:

async def billing(s: SubscriptionService, b: BillingService) -> None:
    async for sub in s.all_subscriptions():
        await b.bill_amount(sub.account_id, sub.to_charge_per_month)

This is, however, just about the slowest implementation of this functionality that it’s possible to implement. So, this is the bad version. Let’s talk about the good version: no-tato programming, if you will. But first, some backstory.
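To make the cost concrete, here is a toy, self-contained sketch of why serial awaits are so slow. The 10 ms sleep is a hypothetical stand-in for a real billing round trip, not part of the essay’s example:

```python
import asyncio
import time

async def fake_bill(account_id: str) -> None:
    # Hypothetical stand-in for one network round trip to the billing service.
    await asyncio.sleep(0.01)

async def serial(n: int) -> float:
    # One potato, two potato: pay the round-trip latency n times in a row.
    start = time.monotonic()
    for i in range(n):
        await fake_bill(f"acct-{i}")
    return time.monotonic() - start

async def concurrent(n: int) -> float:
    # Issue all n calls at once; total wall time is roughly one round trip.
    start = time.monotonic()
    await asyncio.gather(*(fake_bill(f"acct-{i}") for i in range(n)))
    return time.monotonic() - start

serial_time = asyncio.run(serial(20))
concurrent_time = asyncio.run(concurrent(20))
print(concurrent_time < serial_time)  # True
```

With 20 subscriptions the serial version pays twenty round trips back to back; the concurrent version pays roughly one.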

Some Backstory

My father began his career as an APL programmer, and one of the key insights he took away from APL’s architecture is that, as he puts it:

Computers like to do things over and over again. They like to do things on arrays. They don’t want to do things on scalars. So, in fact, it’s not possible to write a program that only does things on a scalar. [...] You can’t have an ‘integer’ in APL, you can only have an ‘array of integers’. There’s no ‘loop’s, there’s no ‘map’s.

APL, like Python[2], is typically executed via an interpreter. Which means, like Python, execution of basic operations like calling functions can be quite slow. However, unlike Python, its pervasive reliance upon arrays meant that almost all of its operations could be safely parallelized, and would only get more and more efficient as more and more parallel hardware was developed.

I said ‘unlike Python’ there, but in fact, my father first related this concept to me regarding a part of the Python ecosystem which follows APL’s design idiom: NumPy. NumPy takes a similar approach: it cannot itself do anything to speed up Python’s fundamental interpreted execution speed[3], but it can move the intensive numerical operations that it implements into operations on arrays, rather than operations on individual objects, whether numbers or not.

The performance difference involved in these two styles is not small. Consider this case study, which shows a 5828% improvement[4] when taking an algorithm from idiomatic pure Python to NumPy.
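As a minimal illustration of the idiom (a toy sum-of-squares, not the linked case study itself), compare a scalar loop with its array-at-a-time equivalent:

```python
import numpy as np

# One-potato-at-a-time: a Python-level loop over scalar values.
def slow_sum_of_squares(values):
    total = 0.0
    for v in values:  # every iteration pays interpreter overhead
        total += v * v
    return total

# Array-at-a-time: one vectorized expression; the loop runs in compiled code.
def fast_sum_of_squares(values):
    arr = np.asarray(values, dtype=np.float64)
    return float(np.sum(arr * arr))

values = list(range(1000))
print(slow_sum_of_squares(values) == fast_sum_of_squares(values))  # True
```

The two functions compute the same answer; the difference is that the vectorized one hands the whole array to compiled code in a single call.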

This idiom is also more or less how GPU programming works. GPUs cannot operate on individual values. You submit a program[5] to the GPU, as well as a large array of data[6], and the GPU executes the program on that data in parallel across hundreds of tiny cores. Submitting individual values for the GPU to work on would actually be much slower than just doing the work on the CPU directly, due to the bus latency involved to transfer the data back and forth.

Back from the Backstory

This is all interesting for a class of numerical software — and indeed it works very well there — but it may seem a bit abstract to web backend developers just trying to glue together some internal microservice APIs, or indeed most app developers who aren’t working in those specialized fields. It’s not like Stripe is going to let you run their payment service on your GPU.

However, the lesson generalizes quite well: anywhere you see an API defined in terms of one-potato, two-potato iteration, ask yourself: “how can this be turned into an array”? Let’s go back to our example.

The simplest change that we can make, as a consumer of these potato-shaped APIs, is to submit them in parallel. So if we have to do the optimization in the orchestration layer, we might get something more like this:

from asyncio import Semaphore, AbstractEventLoop

async def one_bill(
    loop: AbstractEventLoop,
    sem: Semaphore,
    sub: Subscription,
    b: BillingService,
) -> None:
    await sem.acquire()

    async def work() -> None:
        try:
            await b.bill_amount(sub.account_id, sub.to_charge_per_month)
        finally:
            sem.release()

    loop.create_task(work())

async def billing(
    loop: AbstractEventLoop,
    s: SubscriptionService,
    b: BillingService,
    batch_size: int,
) -> None:
    sem = Semaphore(batch_size)
    async for sub in s.all_subscriptions():
        await one_bill(loop, sem, sub, b)

This is an improvement, but it’s a bit of a brute-force solution; a multipotato, if you will. We’ve moved the work to the billing service faster, but it still has to do just as much work. Maybe even more work, because now it’s potentially got a lot more lock-contention on its end. And we’re still waiting for the Subscription objects to dribble out of the SubscriptionService potentially one request/response at a time.

In other words, we have used network concurrency as a hack to simulate a performant design. But the back end that we have been given here is not actually optimizable; we do not have a performant foundation. As you can see, we have even had to change our local architecture a little bit here, to include a loop parameter and a batch_size which we had not previously contemplated.

A better-designed interface in the first place would look like this:

class SubscriptionService(Protocol):
    async def all_subscriptions(
        self,
        batch_size: int,
    ) -> AsyncIterable[Sequence[Subscription]]:
        ...

class Subscription(Protocol):
    account_id: str
    to_charge_per_month: money

@dataclass
class BillingRequest:
    account_id: str
    amount: money

class BillingService(Protocol):
    async def submit_bills(
        self,
        bills: Sequence[BillingRequest],
    ) -> None:
        ...

Superficially, the implementation here looks slightly more awkward than our naive first attempt:

async def billing(s: SubscriptionService, b: BillingService, batch_size: int) -> None:
    async for sub_batch in s.all_subscriptions(batch_size):
        await b.submit_bills(
            [
                BillingRequest(sub.account_id, sub.to_charge_per_month)
                for sub in sub_batch
            ]
        )

However, while the implementation with batching in the backend is approximately as performant as our parallel orchestration implementation, backend batching has a number of advantages over parallel orchestration.

First, backend batching has less internal complexity; no need to have a Semaphore in the orchestration layer, or to create tasks on an event loop. There’s less surface area here for bugs.

Second, and more importantly: backend batching permits for future optimizations within the backend services, which are much closer to the relevant data and can achieve more substantial gains than we can as a client without knowledge of their implementation.

There are many ways this might manifest, but consider that each of these services has their own database, and have got to submit queries and execute transactions on those databases.

In the subscription service, it’s faster to run a single SELECT statement that returns a bunch of results than to select a single result at a time. On the billing service’s end, it’s much faster to issue a single INSERT or UPDATE and then COMMIT for N records at once than to concurrently issue a ton of potentially related modifications in separate transactions.
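As an illustrative sketch of the billing side (the table and column names are made up, and sqlite3 stands in for whatever database the real service uses), a batched write issues one statement and one commit for N records:

```python
import sqlite3

# Hypothetical schema standing in for the billing service's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bills (account_id TEXT, amount INTEGER)")

requests = [(f"acct-{i}", 100 + i) for i in range(5)]

# Batched write: one INSERT statement for N records, one COMMIT.
with conn:  # the context manager wraps the whole batch in a single transaction
    conn.executemany("INSERT INTO bills VALUES (?, ?)", requests)

(count,) = conn.execute("SELECT COUNT(*) FROM bills").fetchone()
print(count)  # 5
```

Contrast this with five separate single-row transactions: the batched form contends for locks once and pays the commit cost once.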

Potato No Mo

The initial implementation within each of these backends can be as naive and slow as necessary to achieve an MVP. You can do a SELECT … LIMIT 1 internally, if that’s easier, and performance is not important at first. There can be a mountain of potatoes hidden behind the veil of that batched list. In this way, you can avoid the potential trap of premature optimization. Maybe this is a terrible factoring of services for your application in the first place; best to have that prototype in place and functioning quickly so that you can throw it out faster!

However, by initially designing an interface based on lists of things rather than individual things, it’s much easier to hide irrelevant implementation details from the client, and to achieve meaningful improvements when optimizing.

Acknowledgements

This is the first post supported by my Patreon, with a topic suggested by a patron.

  1. It’s a really good essay, you should read it. 

  2. Yes, I know it’s actually bytecode compiled and then run on a custom interpreting VM, but for the purposes of comparing these performance characteristics “interpreted” is a more accurate approximation. Don’t @ me. 

  3. Although, thankfully, a lot of folks are now working very hard on that problem. 

  4. No, not a typo, that’s a 4-digit improvement. 

  5. Typically called a “shader” due to its origins in graphically shading polygons. 

  6. The data may represent vertices in a 3-D mesh, pixels in texture data, or, in the case of general-purpose GPU programming, “just a bunch of floating-point numbers”. 


Hibernation

Planet KDE - Mon, 2022-12-12 15:23

So, another day, another update. Today I managed to get hibernation working on my Dell XPS13 Plus (9320) running Debian.

So, a quick recap. I’m running Debian testing. I set up the system with Guided – Use entire disk and Setup LVM. This leaves me with an encrypted root partition. What I want to do now, is to put a reasonably sized swap file there, and make the system hibernate to it.

So, first, I created a 35 GB (35840 MB) swapfile as /swapfile. I prefer to create a swapfile slightly larger than RAM, to ensure that everything fits, and my machine comes with 32 GB of RAM. I used the instructions in the excellent Arch Wiki to set this up. I also edited /etc/fstab, commenting out the swap partition set up by the installer and adding the swapfile.

Then, I followed these instructions to find and set the resume and resume_offset parameters in the /etc/default/grub file. I changed the GRUB_CMDLINE_LINUX_DEFAULT parameter.

A quick reboot followed by a sudo swapon --show told me that the swap file was active, so time to hibernate!

sudo systemctl hibernate

This resulted in a lovely Sleep verb "hibernate" not supported error message in bright red. Lovely.

After some poking around I had a look at dmesg and found a reference to kernel lockdown. It turns out that you cannot hibernate while kernel lockdown is active. Being more concerned about battery life than about some expert hacker stealing my computer and getting at all my data, I decided to try to get rid of it. It turns out that disabling Secure Boot in the BIOS does the trick.

So, now this works. Let’s see when I can fix the next issue. I’ve still got the audio issue discussed earlier and the webcam left to fix.


Jonathan McDowell: Setting up FreshRSS in a subdirectory

Planet Debian - Mon, 2022-12-12 14:30

Ever since the demise of Google Reader I have been looking for a suitable replacement RSS reader. In the past I used to use Liferea but that was when I used a single desktop machine; these days I want to be able to read on my phone and multiple machines. I moved to Feedly and it’s been mostly ok, but I’m hitting the limit of feeds available in the free tier, and $72/year is a bit more than I can justify to myself. Especially when I have machines already available to me where I could self host something.

The problem, of course, is what to host. It seems the best options are all written in PHP, so I had to get over my adverse knee-jerk reaction to that. I ended up on FreshRSS but if it hadn’t worked out I’d have tried TinyTinyRSS. Of course I’m hosting on Debian, and the machine I chose to use was already running nginx and PostgreSQL. So I needed to install PHP:

$ sudo apt install php7.4-fpm php-curl php-gmp php-intl php-mbstring \
    php-pgsql php-xml php-zip

I put my FreshRSS install in /srv/freshrss so I grabbed the 1.20.2 release from GitHub (actually 1.20.1 at the time, but I’ve upgraded to the latest since) and untarred it in there. I gave www-data access to the data directory (sudo chown -R www-data /srv/freshrss/data) (yes, yes, I could have created a new user specifically for FreshRSS, but I’ve chosen not to for now). There’s no actual need to configure things on the filesystem; you can do the initial setup from the web interface. Which is where the trouble came. I’ve been an Apache user since 1998 and as a result it’s what I know and what I go to. nginx is new to me. And I wanted my FreshRSS instance to live in a subdirectory of an existing TLS-enabled host, rather than have its own hostname. Now, at least FreshRSS copes with this (unlike far too many other projects); you just have to configure your webserver correctly. Which took me more experimentation than I’d like, but I’ve ended up with the following snippet:

# PHP files handling
location ~ ^/freshrss/.+?\.php(/.*)?$ {
    root /srv/freshrss/p;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    fastcgi_split_path_info ^/freshrss(/.+\.php)(/.*)?$;
    set $path_info $fastcgi_path_info;
    fastcgi_param PATH_INFO $path_info;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

location ~ ^/freshrss(/.*)?$ {
    root /srv/freshrss/p;
    try_files $1 /freshrss$1/index.php$is_args$args;
}

Other than the addition of the freshrss prefix this ends up differing slightly from the FreshRSS webserver configuration example. I ended up having to make the path info on the fastcgi_split_path_info optional, and my try_files in the bare directory location directive needed $is_args$args added or I just ended up in a redirect loop because the session parameters didn’t get passed through. I’m sure there’s a better way to do it, but I did a bunch of searching and this is how I ended up making it work.

Before firing up the web configuration I created a suitable database:

$ sudo -Hu postgres psql
psql (13.8 (Debian 13.8-0+deb11u1))
Type "help" for help.

postgres=# create database freshrss;
CREATE DATABASE
postgres=# create user freshrss with encrypted password 'hunter2';
CREATE ROLE
postgres=# grant all privileges on database freshrss to freshrss;
GRANT
postgres=# \q

I ran through the local configuration, creating myself a user and adding some feeds, then created a cronjob to fetch updates hourly and keep a log:

# mkdir /var/log/freshrss
# chown :www-data /var/log/freshrss
# chmod 775 /var/log/freshrss
# cat > /etc/cron.d/freshrss-refresh << EOF
33 * * * * www-data /srv/freshrss/app/actualize_script.php > /var/log/freshrss/update-$(date --iso-8601=minutes).log 2>&1
EOF

Experiences so far? Reasonably happy. The interface seems snappy enough, and works well both on mobile and desktop. I’m only running a single user instance at present, but am considering opening it up to some other folk and will see how that scales. And it clearly indicated a number of my feeds that were broken, so I’ve cleaned some up that are still around and deleted the missing ones. Now I just need to figure out what else I should be subscribed to that I’ve been putting off due to the Feedly limit!

