Feeds

PreviousNext: Becoming a Drupal Certified Partner: How commitment to open source drives value and success at PreviousNext

Planet Drupal - Mon, 2024-10-28 20:04

For PreviousNext, the decisions to make contribution part of how we work and to become a Drupal Certified Partner (DCP) have paid off many times over, both in terms of business growth and team development. I would encourage any agency considering it to take the leap. The Drupal ecosystem is a community that gives back as much as you put in, and becoming a DCP is one of the best ways to contribute to its continued success.

by Owen Lansbury / 29 October 2024

DCP: a certification that matters

As one of the co-founders of PreviousNext, I’ve seen firsthand how our commitment to open source and our partnership with the Drupal community have shaped who we are as a company and driven our success. Being a Drupal Certified Partner isn’t just a credential; it’s a core element of our business model and a commitment to our clients, our team, and the open source community we rely on.

Here’s why being a Drupal Certified Partner matters to us, and why I think other Drupal agencies should consider joining the program too...

The early days: how PreviousNext found its path with Drupal

When we founded PreviousNext back in 2009, my co-founder, Kim Pepper, and I had both been working with web technologies since the early days of the web itself. As we looked at the technologies available, Kim’s background in Java and his interest in “serious tech” like Ruby and Python made those tools a natural focus. Then, a project opportunity arose with a leading public broadcaster and I suggested we pitch Drupal.

At the time, people didn’t necessarily recognise Drupal as a serious player in the enterprise tech stack yet, so I had to convince Kim to take a closer look. We ended up winning that project, and before we knew it, Drupal was opening doors to big, new clients. Our decision to specialise in Drupal was cemented in 2010 during our first DrupalCon in San Francisco. 

Image attribution: DrupalCon SF

Walking into the same keynote room where Steve Jobs would announce the latest Apple products, seeing thousands of people and realising the scale of the Drupal community made it clear that this was something much bigger on a global scale than we had ever imagined. A vibrant community of thousands of people was pushing the platform forward and was enthusiastic to help us become a part of it. That moment changed everything for us.

Fostering a culture of contribution

One of our early initiatives was integrating contribution into new team members’ onboarding and professional development process at PreviousNext. Whether they had prior Drupal experience or not, we introduced new hires to the world of Drupal contribution as part of their journey with our team. This helped build their skills, broaden their professional profiles and connect them to the global Drupal community. Our developers quickly became module maintainers and many grew to play key roles in critical areas of the Drupal project.

Our culture of contribution also extended to encouraging team members to speak at DrupalCon and other conferences. We supported those who wanted to share their expertise, recognising that building their personal and our company profile in the community was a valuable form of marketing and growth.

Contribution as a competitive advantage

From early on, we understood the importance of actively contributing to the Drupal community. Contribution became a core part of our company’s culture and a competitive advantage for us. We adopted a policy inspired by Google at the time, allowing our developers to dedicate 20% of their billable hours to contribution and professional development. This policy attracted the best developers and ensured that our team remained engaged, motivated and on the cutting edge of Drupal development outside of their regular client work.

For PreviousNext, contributing isn’t about checking boxes or chasing credits - it’s a key part of our process and commitment to the Drupal community. As contributors, our team members develop and deepen their skills and have opportunities to collaborate with and be mentored by some of the most brilliant Drupal developers globally. This investment in people is the foundation of our reputation as Australia’s most experienced Drupal agency and gives us a competitive advantage both in our region and internationally.

A hiring and retention advantage

Finding and training new talent is costly for everyone in our industry, so supporting contribution and personal and professional development for our team members is a massive win in this regard. Our employee retention across the entire team is up to triple the industry average, with studies I've read indicating tech industry employee tenure is typically 2–3 years in one company. 

A sales advantage

Contribution helps us sell, too. We quickly realised that contribution gave us a significant edge when pitching to clients. By showing our contributions and involvement in the community, we could demonstrate that we weren’t just Drupal users but actively shaping its future. This deep involvement gave us insights and access to networks beyond what other agencies could offer and it helped us win clients by emphasising our commitment to open source and best practices.

A business advantage

Our profit margin is consistently three times higher than the IBISWorld benchmark for Web Design Services in Australia - a figure confirmed by our annual independent valuation as an employee-owned company. While we might occasionally lose a pitch on price alone, high-end customers are generally happy to pay a bit more for a stable team with a proven track record of deep experience and high quality outcomes.

As you can see, contribution is not something we view as a business cost at PreviousNext, it's a well-proven business accelerator! 

Contribution benefits clients and strengthens projects

We’ve seen that contributing to Drupal isn’t just about the altruism of 'giving back'; it’s a deeply practical business advantage. When our developers fix a bug or add a feature in Drupal core or modules, they improve the tools that our clients rely on. By committing these improvements back to the community, we ensure that future projects can leverage them without reinventing the wheel. That is, without wasting time and effort to recreate work over and over.

Our commitment to contribution is a big reason why our code adheres to the highest coding standards. Sure, we follow best practices, ensuring that every line we write can be picked up by any other Drupal expert and understood. But also, when you know you’re submitting code publicly for review by Drupal’s Core and Security Teams, it's a strong motivator to deliver high quality work. This transparency and adherence to standards offer clients security: they know that if they choose to work with another agency down the road, the work is maintainable and up to the highest standards. It’s a win for our clients, the broader community, and us.

The long-term value of supporting open source

While many agencies might measure ROI in terms of leads generated or short-term gains, we take a very different approach. Our outlook is simple: if Drupal succeeds in the long term, so does PreviousNext. Whether a client picks us or another DCP, the pie grows for everyone if they stay with Drupal. That’s why we invest in the platform and focus on contributing where we can make the most impact.

Our contributions aren’t centrally directed or micromanaged - each developer follows their passions. This approach fosters engagement and allows developers to shape their contributions around both client work and personal development goals. Recently, our team chose to focus on the Experience Builder initiative - which will be incorporated into the new Drupal CMS - a community-driven project dedicated to making Drupal a best-in-class low/no-code CMS for content creators and ambitious marketers. This decision came from the team, driven by their excitement to make a difference in an area they care about and have the expertise to assist.

Why being a Drupal Certified Partner matters

Becoming a Drupal Certified Partner (DCP) when the program first launched was a natural step in our company's journey. The DCP designation is more than a badge; it recognises our commitment to quality, collaboration and the future of Drupal. Clients look to us for our technical abilities, deep understanding of the ecosystem, and active involvement within it.

This partnership with Drupal also gives us a unique advantage when talking with potential clients. At the end of every pitch, we emphasise that we’re not just users of Drupal - we’re contributors. We understand the ins and outs of the platform, influence the roadmap and can leverage our relationships with an entire network of other Drupal developers around the world. This level of involvement is something we would never achieve as a small Australian company if we were simply downloading and using the software. We bring that value to every project we take on and it has been a significant factor in winning business and building client trust.

You should consider becoming a DCP

For any agency working with Drupal, becoming a DCP isn’t just another badge for your website - it’s a way to amplify your connection to the Drupal community, clients, and the future of the platform. The program provides visibility and demonstrates commitment, giving clients confidence in your skills and dedication to Drupal’s success. DCP status has brought us even closer to the Drupal community, helping us build relationships and leverage a wealth of knowledge and expertise. While it might seem counterintuitive for a company to encourage its competitors to boost their expertise and credentials, Drupal itself benefits tremendously when clients know there's an entire ecosystem of highly qualified vendors who can deliver their projects.

Find out more about the Drupal Certified Partner program.

Categories: FLOSS Project Planets

Akademy 2024 Experience

Planet KDE - Mon, 2024-10-28 20:00
Getting There!

Wow! What a trip! 20 hours across 2 flights, 2 hours on the train with travel buddies Nate Graham and Bhushan Shah, and several bus "adventures" with Nate Graham to our hotel. The hotel... Let's not go into too much detail; suffice it to say it was an absolute mess.

This being my first in-person Akademy, getting there was a very anxious experience, but with global roaming on my phone to keep communication flowing and a few travel buddies, it was certainly made much better!

But once we were settled in and unpacked, it was off to the first event!

The Welcome Event!

Wow, was it chaos once all the people showed up, but amazing to see so many KDE users and developers! A few locals even popped their heads in, confused by the packed-out venue. We thankfully managed to get a ride to the venue in Adriaan de Groot's smooth EV and found a few others after parking.

The place had a great vibe and the free drinks and snacks courtesy of KDE went down a treat! I quickly connected to the free Wi-Fi, spun up some translations of the menu and grabbed the Mexican fries with guacamole, tomatoes and onions. It was amazing with a few drinks to wash it all down!

On to the main event!

Saturday Talks!

Having the talks start a little later was great, as it meant we didn't have to get up early and rush out the door!

Adriaan also brought Stroopwafels with him! So delicious!

Only Hackers Will Survive

The first talk (barring the brief opening by the Akademy team) got right into the thick of it: circular economies, electronic waste and how we, the hackers, have the right mindset to keep things working well beyond manufacturers' expected lifetimes, whether they're limited by a lack of software updates or pushed out of the market by ever more demanding software performance requirements.

Scenes of landfills around the globe, with people in Third World countries sorting through them to reclaim precious metals, were on display to really bring home the true impact. Here's hoping KDE's software and projects can help with this, especially with the Blue Angel Certification. I am really hoping to see Plasma Mobile continue to take charge in this area, where mobile devices are often discarded rapidly in favour of the newest shiniest thing.

Goals Wrap Up!

Every two years, KDE chooses new goals to focus on to make KDE's software better for all. At this year's Akademy it was time to review the outcomes of the goals from the last two years.

Carl Schwan went through his accessibility goal outcomes, which included a discussion about how our hardware partners want to improve accessibility, 190 accessibility-related code changes, and a new accessibility inspector application! There were also notes about the Appium CI/CD test suite and the Selenium driver test suite that help ensure accessibility in our software, as well as Project Spiel, a new text-to-speech engine for the Free Desktop, plus turning accessibility into a permanent goal for KDE.

Nate Graham wrapped up his goals around automation and systematisation. These were about being lazier in a good way: automating the boring, trivial things we spend time doing all the time; creating policies so that issues are decided by policy rather than by debates over opinions; working as teams instead of alone; and documenting how we do things instead of keeping tribal knowledge locked away in our brains.

I wasn't able to attend much of Cornelius Schumacher's KDE Eco goal wrap-up, so I didn't get any notes on that, but I do love that KDE cares for the environment, and this should be an ongoing goal to make sure our software (and hardware partners) are acting responsibly to ensure a brighter future for the next generations.

Report of the Board

The KDE board assembled in person to give us a run-down of 2023, including all their favourite things that happened: paying members going from 53 to 719, GitHub sponsors up to 132 from 67, and 170 non-member recurring donations on Donorbox, up from 130! On top of that, the best fundraising campaign ever in December 2023. A monumental task was also completed with the release of Plasma 6 alongside Frameworks and Gear at the same time! The KDE Eco grant was extended, and they hired new staff as part of the Make a Living goal, including someone to manage the goals. The financial situation improved vastly due to a successful Donorbox rollout and fundraising campaign.

Looking to the future, the board is encouraging more community members to get on board with the KDE Goals, and with the improved financial situation they are hoping to continue improving KDE's offerings. Conferences and in-person sprints are now viable post-apocalypse! Ask the board for funding/organisation if you want to organise or attend one and represent KDE! They are also aiming to move beyond the Make a Living effort, leveraging current staff for further work without losing sight of their goals, and to improve the management of staff and contracts and keep staff on. They're also hoping to expand our app store presence on Google Play, the Windows Store and Flathub to get more apps published and investigate possible revenue sources from this.

Adapt or Die: How new Linux packaging approaches affect wider KDE

David Edmundson gave an awesome talk about Linux packaging and where we are headed. Flatpaks! Oh, and I guess Snaps and AppImages and that new one from Deepin… But his talk was generally focussed on how KDE as a community will need to face the challenges of containerised applications, since immutable distros are appearing all the time now. In this well-researched discussion, he raised several case studies that he has identified as problem points for progress. This was one of my favourite talks, as I am a big proponent of Flatpak as the future of application distribution and plan on working with David to that end.

Looking Back: What's Next

Wait, what? What does that even mean? Nicolas Fella decided to share with us his run-down of how the porting from Qt5 to Qt6 went as a whole, from the timeline stretching way back to 2019 to the KDE Megarelease of 2024, including how each step was planned out and then managed into a reality. He also gave us a run-down of the good and the bad from his viewpoint. The good included lots of great new features and loads of pre-release testing. The bad included bugs (but you can never get rid of all of these), controversial decisions and broken distros at release. He also noted a lack of documentation for third party users of KDE Frameworks and some things that got left on the cutting room floor due to a lack of time to get them in. Those have been marked TODO KF7 😂

An Operating System of Our Own

Harald Sitter gave us a run-down of his new operating system, KDE Linux (previously - and my favourite name - Project Banana). This is a new image-based distro that uses Btrfs and images of the OS, with easy switching between them. It is designed for anyone to use, from KDE developers to users and hardware vendors! It will bring in apps from Flatpak (my favourite) and Snap to keep the OS and applications separate. I'd been watching this for a little while before Akademy, so I was happy to see it announced; it has attracted many people on board and accelerated development immensely! Come join us @ #kde-linux:kde.org on Matrix!

A look on the Bright Side of Life

Harald gave us a quick lightning talk about remaining calm and enjoying what we do. Sometimes things are annoying or frustrating but sometimes we need to step back, look at all the awesome things we make and the amazing people we do that with. If in doubt, ask for help or a rubber duck and come back to fight another day.

Sunday Talks!

Sadly I didn't take a lot of notes on Sunday but I attended a few talks.

  • Openwashing - How do we handle (and enforce?) OSS policies in products? by Markus Feilner, Holger Dyroff, Richard Heigl, Leonhard Kugler and Cornelius Schumacher. A discussion on companies using the title and reputation of open source without actually being open or contributing anything to the community.

  • Group Photo - My first time in a KDE related photo!

  • Contributing is more than just code by Kai Uwe Broulik was a really important talk for me as I can't code. It's just not how my brain works, but the topic of his talk resonated with me so much. KDE has lots of frameworks, apps and libraries, and that requires a lot of code. But there is SO much more work to be done around that code: translations, quality assurance, bug triaging and documentation, just to name a tiny few. If you're interested in contributing to KDE, there are LOTS of things you can help with!

  • Financial support for working on KDE by Jos van den Oever was a great talk about how you as a KDE contributor can apply for funding. It was mostly focussed on NLnet, who had a representative there to encourage us to apply for funding and explain the application and approvals processes. They also stuck around for the next few days to help anyone who wanted to apply to submit their application!

Daily driving Plasma Mobile and what's still lacking

Another talk that really hit home for me was by Bart Ribbers on Plasma Mobile. We've got amazing stories for our desktop offering, but the mobile space is a huge part of most people's daily lives. So many people don't own a computer any more, and they're living their digital life through a mobile phone. This talk gave insight into Bart's daily life with his Plasma Mobile enabled phone and where we need to narrow our focus to improve the experience.

BoFs & Daytrip

Over the following days I attended many BoFs:

  • Opt Green: Website Bloat and Green Web, in which we discussed how our website content affects the environment: from intensive JavaScript and oversized images increasing CPU usage and bandwidth, to the core of the web itself and whether the electricity used by data centres and traffic hubs is renewable or comes from dirty fossil fuels

  • Opt Green / KDE Eco which was all about KDE Eco's certification of Okular, their involvement in defining the Blue Angel specification, getting more KDE apps certified and how KDE was recognised as an expert in sustainability in the German parliament!

  • KDE Goals - We care about your Input, where discussions happened around how we improve input methods - from alternate languages and alphabets to game controllers, digital input tablets and other devices

  • KDE's release schedules was all about starting discussions about when and how we release software, including how we can better cooperate with downstream distros who are kindly distributing our software

  • Plasma Discover by Aleix gave us insight into the recent changes to performance by Harald and himself, and just a great general discussion around Plasma's current state and what is needed for the future of software stores.

  • KDE Goals - Streamlined Application Development Experience was a rather interesting chat about how we currently develop apps and many discussions about how we improve that process, introduce new blood to the community and make it seriously easy to start a new KDE application.

Food!

This worried me in the lead-up and on my way to Germany, as I had decided earlier this year to go vegetarian. However, I was delightfully surprised by the quality and variety of food available, with the food organised by the Akademy team being inclusive of my requirements and tasty! The food at the university cafeteria was also amazing, to my surprise, and I learned that they had won awards!

I also thankfully found some students who sold me a crate of Cola for 10 Euros. I grabbed one and gave the rest to the Akademy help desk for them to give out to attendees for a 1 Euro donation ;)

Outro
  • What an amazing event! Thank-yous, in no particular order, go to Nate Graham, the KDE e.V. for sponsoring my travel, the Akademy team, and everyone I met at Akademy for making it an awesome experience - especially for someone who hasn't been to a conference before - to all of the KDE community for making awesome software, and to the Akademy and KDE sponsors who make these events possible!
Categories: FLOSS Project Planets

Ruqola 2.3.1

Planet KDE - Mon, 2024-10-28 20:00

Ruqola 2.3.1 is a feature and bugfix release of the Rocket.Chat client.

New features and fixes:

  • Use "view-conversation-balloon-symbolic" icon when we have private conversation with multi users
  • Add version in market application information
  • Fix reset password
  • Fix mouse position when QT_SCREEN_SCALE_FACTORS != 1
  • Add missing icons
  • Fix create topic when creating teams
  • Fix discussion count information
  • Remove @ or # when we search user/channel
  • Fix edit message logic

URL: https://download.kde.org/stable/ruqola/
Source: ruqola-2.3.1.tar.xz
SHA256: 99356ec689473cd5bfaca7f8db79ed5978efa8b3427577ba7b35c1b3714d5fcb
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Riddell jr@jriddell.org
https://jriddell.org/jriddell.pgp
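
Before building from source, you can check the tarball against the SHA256 published above. A minimal Python sketch, assuming the file sits in the current directory:

# Verify ruqola-2.3.1.tar.xz against the SHA256 published above.
import hashlib

EXPECTED = "99356ec689473cd5bfaca7f8db79ed5978efa8b3427577ba7b35c1b3714d5fcb"

sha256 = hashlib.sha256()
with open("ruqola-2.3.1.tar.xz", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha256.update(chunk)

print("OK" if sha256.hexdigest() == EXPECTED else "MISMATCH")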

Categories: FLOSS Project Planets

Warning: Krita 5.2.6 beta on Android is currently broken

Planet KDE - Mon, 2024-10-28 20:00

On releasing the latest version of Krita in our Android/ChromeOS beta program, we discovered too late that there was a problem that could prevent Krita from starting.

Since the Google Play Store Console does not allow reverting a release to an earlier version, we are now urgently working on a fix, which we will release as soon as possible.

Our apologies for the inconvenience.

The current nightly builds for Android work again, with some limitations:

  • Take care that removing the store version of Krita does not remove the application data: your artwork could be lost.
  • In the nightly builds you need to install any brush presets separately.

You can get the nightly builds here: Krita Next Nightly Builds. You will need to select the package that is right for the architecture of your device.

Installing the nightly builds requires enabling developer mode on your device and needs considerable technical insight.

If you do not feel comfortable with this, please wait until the new official release lands in the Play Store in about two days.

Categories: FLOSS Project Planets

PSA: KDecoration API break in Plasma 6.3

Planet KDE - Mon, 2024-10-28 19:00

Fractional scaling is hard. Anyone that had the misfortune of working on it knows that… so it won’t surprise a lot of people that it’s not all figured out yet! Today I’ll talk about the fractional scaling problems with KWin’s server side decorations, and why we need to do an API break to fix it.

What’s the problem?

This is the simplest part. Many decorations have elements that need to be pixel perfect, like outlines that are only a single pixel wide. When they’re not perfectly scaled, or positioned wrongly, that’s sometimes quite visible and annoying:

What causes these issues?

The source of all evil with fractional scaling is also the cause of most issues here: Integer logical coordinates.

Logical coordinates are a way to represent the size of something on the screen in a mostly display-independent way and are quite useful for the size and position of things like windows or the cursor. They’re calculated in a really simple way:

coordinate_logical = coordinate_pixels / scale

With just that equation, there are no problems yet - you can just multiply the logical coordinate by the display scale, and you get back the original coordinate in pixels. When you round that logical coordinate and do some calculations with it, things get weird though… let’s look at the concrete example of a window at scale 1.25 with a 1 pixel wide outline:

unit                 | outline width | window width | outline width | total size | total size in pixels
(integer) pixels     | 1             | 27           | 1             | 29         | 29
fractional logical   | 0.8           | 21.6         | 0.8           | 23.2       | 29
integer logical      | 1             | 22           | 1             | 24         | 30

As you might’ve guessed, KWin’s decoration plugin API is using integer logical coordinates, and this mismatch between the window size and the size of its components causes most of the problems. Just doing a straightforward int -> float conversion isn’t enough to fix this though; a few more changes are needed.
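
To see the mismatch in plain numbers, here is the scale-1.25 example from the table redone as a few lines of Python (illustrative arithmetic only, not KWin code):

scale = 1.25
outline_px, window_px = 1, 27

total_px = 2 * outline_px + window_px              # 29 pixels on screen
outline_log = outline_px / scale                   # 0.8 logical units
window_log = window_px / scale                     # 21.6 logical units

fractional_total = 2 * outline_log + window_log    # 23.2 logical units
integer_total = 2 * round(outline_log) + round(window_log)  # 1 + 22 + 1 = 24

print(fractional_total * scale)  # 29.0 -> matches the real pixel size
print(integer_total * scale)     # 30.0 -> one pixel too wide!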

Changes in KWin

KWin will provide decorations with the fractional logical size of windows, provide them with the scale factor they should render for, and use the decoration’s fractional border sizes to position the window and decoration pieces properly in the scene.

Changes in Decorations

Because of the API break, decorations using the C++ API need to be updated to the new KDecoration3 API, or they will not be loaded. A minimalistic port would only need to round all the values, but there will of course still be fractional scaling issues with that.

Assuming you want to make the decoration work properly with fractional scaling, you also need to use the provided scale factor to calculate border sizes, and when painting things with QPainter, you need to take care to snap all geometries to the pixel grid, or anti-aliasing may turn single-pixel lines into a blurry mess.
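
For the QPainter case, the usual idiom for that snapping (shown here as a sketch, not KWin's actual code) is to round a logical coordinate in device pixels and convert it back:

# Snap a logical coordinate to the device pixel grid at the given scale.
def snap_to_pixel_grid(logical, scale):
    # Round in device pixels, then convert back to logical units.
    return round(logical * scale) / scale

print(snap_to_pixel_grid(21.6, 1.25))  # 21.6 (27.0 px, already on the grid)
print(snap_to_pixel_grid(21.5, 1.25))  # 21.6 (26.875 px rounds to 27 px)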

Note that this work isn’t completed yet, and some additional API changes may happen while we’re breaking the API already. A porting guide with all the changes will be provided before the release of Plasma 6.3.

As Aurorae decorations are just svg files, they are not affected by this API break and will continue to work like before without any changes.

If you have any questions about this change, or about how to port a decoration over to the new API, please reach out to us at #kwin:kde.org on Matrix!

Categories: FLOSS Project Planets

GNUnet News: GNUnet 0.22.2

GNU Planet! - Mon, 2024-10-28 19:00
GNUnet 0.22.2

This is a bugfix release for gnunet 0.22.1. It fixes some regressions and minor bugs.

Links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try https://ftp.gnu.org/gnu/gnunet/

Categories: FLOSS Project Planets

Paolo Melchiorre: 2025 Django Software Foundation board nomination

Planet Python - Mon, 2024-10-28 19:00

My self-nomination statement for the 2025 Django Software Foundation (DSF) board of directors elections

Categories: FLOSS Project Planets

Community Working Group posts: Nominate someone for the 2025 Aaron Winborn Award

Planet Drupal - Mon, 2024-10-28 15:03

The Drupal Community Working Group is pleased to announce that nominations for the 2025 Aaron Winborn Award are now open. This is your chance to recognize someone for their service, integrity, kindness, and above-and-beyond commitment to the Drupal community.

In addition to receiving a physical award, winners of the award also receive a scholarship and travel stipend for them to attend DrupalCon North America and recognition in a plenary session at the event.

Nominations are now open to everyone in the Drupal community! Whether someone has made an impact locally, regionally, or across the globe, we want you to nominate them. If you know someone who’s made a meaningful difference, big or small, now’s the perfect chance to recognize their contributions.

The Aaron Winborn Award was established to honor the legacy of Aaron Winborn, a long-time Drupal contributor whose battle with Amyotrophic Lateral Sclerosis (ALS), also known as Lou Gehrig's disease, ended on March 24, 2015. Inspired by a suggestion from Hans Riemenschneider (https://www.drupal.org/u/nonprofit), the Community Working Group, with the support of the Drupal Association, created this award to celebrate individuals who embody Aaron's spirit and dedication.

Nominations are open until Friday, March 21, 2025.
A committee consisting of the Community Working Group members (Conflict Resolution Team) as well as past award winners will select a winner from the nominations.

* Current members of the CWG Conflict Resolution Team and previous winners are not eligible to win the award.

Previous winners of the award are:

Now is your chance to be heard: show support for and recognize an amazing community member!

Please submit a nomination today! 

Call for Creators!

If you or someone you know is an amazing creator who’d like to help craft one of our future Aaron Winborn Awards, please reach out to the Drupal Community Working Group.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #473 - Color in CSS with Sass

Planet Drupal - Mon, 2024-10-28 15:00

Today we are talking about Color with CSS, Sass, and bringing it all into Drupal with guest Aubrey Sambor. We’ll also cover Navigation Extra Tools as our module of the week.

For show notes visit: https://www.talkingDrupal.com/473

Topics
  • A little career background
  • Why Front end
  • Do you prefer JS or CSS
  • How do colors work today in CSS
  • Is this different from the past
  • What is gamut
  • Can color functions help with contrast
  • What color functions make you the most excited
  • Is Sass still a thing
  • Do you use preprocessors with color functions
  • Post CSS in Drupal
  • Any modules you can recommend to help with CSS colors
  • Any benefit for single directory components or web components
Resources

Hosts

Nic Laflin - nLighteneddevelopment.com - nicxvan
John Picozzi - epam.com - johnpicozzi
Aubrey Sambor - star-shaped.org - starshaped

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you been using the new Navigation module in Drupal core, but wanted some of the useful links previously available in the Admin Toolbar Tools submodule? There’s a module for that
  • Module name/project name: Navigation Extra Tools
  • Brief history
    • How old: created in Oct 2024, less than a week ago by friend of the podcast James Shields aka lostcarpark
    • Versions available: 1.0.0-beta3 which works with Drupal 10.3 and 11
  • Maintainership
    • Actively maintained, already 3 releases
    • Security coverage - too new, but hopefully will have in time
    • Test coverage
    • Number of open issues: 8 “open” issues, 4 of which are bugs, but all but one of which are now marked as fixed with the latest release
    • Usage stats: 12 sites
  • Module features and usage
    • With this module enabled, the new left side Navigation menu available in Drupal core will include links to clear caches (all or a specific cache), run cron, and run database updates
    • It’s a good example of a module that does something very specific and very useful, so I wanted to share it with our listeners as quickly as possible
    • I know these functions are ones I’ve been missing in my own Drupal 11 dev sites, so I’m looking forward to using this module right away
Categories: FLOSS Project Planets

Improving Xwayland window resizing

Planet KDE - Mon, 2024-10-28 12:59

One of the quickest ways to determine whether a particular application runs using Xwayland is to resize one of its windows and see how it behaves, for example

A script element has been removed to ensure Planet works properly. Please find it in the original post.

While it can be handy for debugging purposes, overall it makes the Plasma Wayland session look less polished. So, one of the goals for 6.3 was to fix this visual glitch.

This article will provide some background behind what caused the glitch and how we addressed it. Just in case, here’s the same application, which was shown in a screen cast above, but with the corresponding resizing fixes in:

A script element has been removed to ensure Planet works properly. Please find it in the original post.

X11 frame synchronization protocol(s)

On X11, all window changes typically take place immediately, including resizing. This can lead to some issues. For example, if a window is resized, it can take a while until the application repaints the window with the new size. What if the compositing manager decides to compose the screen in the meantime? You’re likely going to see some sort of visual glitches, e.g. the window contents getting cropped or parts of the window that have not been repainted yet.

In order to address this issue, there exists an X11 protocol to synchronize window repaints during interactive resize. An application/client wishing to participate in this protocol needs to list _NET_WM_SYNC_REQUEST in the WM_PROTOCOLS property of the client window and also set the XID of the XSync counter in the _NET_WM_SYNC_REQUEST_COUNTER property. When the WM wants to resize the window, the following will happen:

  1. The window manager sends a _NET_WM_SYNC_REQUEST client message containing a serial that the client will need to put in the XSync counter after processing a ConfigureNotify event that will be generated after the window is resized. The compositing manager and the window manager will block window updates until the XSync request acknowledgement is received;
  2. The WM resizes the client window, for example by calling the xcb_configure_window() function;
  3. The client would then repaint the window with the new size and update the XSync counter with the serial that it had received in step 1;
  4. The window manager and the compositing manager unblock window updates after receiving the XSync request acknowledgement. For example, now the window can be repainted by the compositing manager and there shouldn’t be glitches as long as the client behaves well.
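
To make the ordering above concrete, here is a toy Python simulation of the handshake. It only models the message ordering; the class and method names are invented for illustration, not real XCB code:

# Toy model of the basic frame synchronization handshake described above.
# Real compositors speak XCB; these classes only capture the ordering.

class Client:
    def __init__(self):
        self.sync_counter = 0       # the XSync counter owned by the client
        self.pending_serial = None

    def on_sync_request(self, serial):              # step 1 arrives
        self.pending_serial = serial

    def on_configure_notify(self, width, height):   # step 2 arrives
        print(f"client repaints at {width}x{height}")  # step 3: repaint...
        self.sync_counter = self.pending_serial       # ...then acknowledge


class WindowManager:
    def __init__(self, client):
        self.client = client
        self.serial = 0
        self.updates_blocked = False

    def resize(self, width, height):
        self.serial += 1
        self.updates_blocked = True                     # block window updates
        self.client.on_sync_request(self.serial)        # step 1
        self.client.on_configure_notify(width, height)  # step 2: xcb_configure_window()

    def on_counter_changed(self):                       # step 4 (in reality an XSync notification)
        if self.client.sync_counter == self.serial:
            self.updates_blocked = False
            print("updates unblocked: safe to compose the next frame")


wm = WindowManager(Client())
wm.resize(90, 100)
wm.on_counter_changed()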

Note that the window manager and the compositing manager are often the same. For example, both KWin and Mutter are compositing managers and window managers.

The frame synchronization protocol described above is called basic frame synchronization protocol. There is also an extended frame synchronization protocol, but it is not standardized and it is implemented only by a few compositing managers.

_NET_WM_SYNC_REQUEST and Xwayland

KWin supports the basic frame synchronization protocol, so there should be no visual glitches when resizing X11 windows in the Plasma Wayland session, right? At first glance, yes, but we forgot about the most important detail: Wayland compositors don’t use XCompositeNameWindowPixmap() or xcb_composite_name_window_pixmap() to grab the contents of X11 windows. Instead, they rely on Xwayland attaching graphics buffers to wl_surface objects, so there is no strict ordering between the Wayland compositor receiving an XSync request acknowledgement and Xwayland committing graphics buffers for the new window size.

In order to help better understand the issue, let’s consider a concrete example. Assume that a window with geometry 0,0 100x100 is being resized by dragging its left edge. If the left edge is dragged 10px to the right, the following will happen:

  1. A _NET_WM_SYNC_REQUEST client message will be sent to the client containing the XSync counter serial that must be set after processing the ConfigureNotify event that will be generated after the Wayland compositor calls xcb_configure_window() with the new window size;
  2. The Wayland compositor calls xcb_configure_window() to actually resize the window;
  3. The client receives the sync request client message and the ConfigureNotify event, repaints the window, and acknowledges the sync request;
  4. The Wayland compositor receives the sync request acknowledgement and updates the window position to 10,0.

But here is the problem: when the window position is updated to 10,0, it’s not guaranteed that the wl_surface associated with the X11 window has a buffer with the new window size, i.e. 90x100. It can take a while until Xwayland commits a graphics buffer with the right size. In the meantime, the compositor could compose the next frame with the new window position, i.e. 10,0, but the old surface size, i.e. 100x100. It would look as if the right window edge sticks out of the window decoration. After Xwayland attaches a buffer with the right size, the right window edge will correct itself.

So, ideally, the Wayland compositor should update the window position after receiving the XSync request acknowledgement and Xwayland attaching a new graphics buffer to the wl_surface.

With that in mind, the frame synchronization procedure looks as follows:

  1. The compositor blocks wl_surface commits by setting the _XWAYLAND_ALLOW_COMMITS property to 0 for the toplevel X11 window. This is needed to ensure the consistent order between XSync request acknowledgements and wl_surface commits. As long as the _XWAYLAND_ALLOW_COMMITS property is set to 0, Xwayland will not attempt to commit the wayland surface, for example attach a new graphics buffer after the client repaints the window;
  2. The compositor sends a _NET_WM_SYNC_REQUEST client message as before;
  3. The compositor resizes the client window as before;
  4. The client repaints the window and acknowledges the XSync request as before;
  5. After receiving the XSync acknowledgement, the compositor unblocks surface commits by setting the _XWAYLAND_ALLOW_COMMITS property to 1. Note that the window updates are still blocked, i.e. the window position is not updated yet;
  6. After Xwayland commits the wl_surface with a new graphics buffer, the window updates are unblocked, e.g. the window position is updated.
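
Expressed the same way as the earlier toy simulation, the Xwayland-aware procedure only adds the _XWAYLAND_ALLOW_COMMITS writes and the extra wait for the buffer commit. Again a hedged sketch with invented names, where set_allow_commits() stands in for the real property update:

# Sketch of the Xwayland-aware ordering (steps 1-6 above), with invented
# helper names. set_allow_commits() stands in for the xcb_change_property()
# call on _XWAYLAND_ALLOW_COMMITS.

def set_allow_commits(window, allowed):
    print(f"{window}: _XWAYLAND_ALLOW_COMMITS = {int(allowed)}")


class XwaylandResize:
    def __init__(self, window):
        self.window = window
        self.window_updates_blocked = False

    def begin(self, width, height):
        set_allow_commits(self.window, False)           # 1. block surface commits
        self.window_updates_blocked = True
        print("send _NET_WM_SYNC_REQUEST")              # 2.
        print(f"configure window to {width}x{height}")  # 3.
        # 4. the client repaints and acknowledges the XSync request

    def on_sync_acknowledged(self):
        set_allow_commits(self.window, True)            # 5. commits unblocked,
        # ...but window updates (e.g. the position) stay blocked for now

    def on_buffer_committed(self):
        self.window_updates_blocked = False             # 6. new buffer is in:
        print("window position updated")                # now move the frame


resize = XwaylandResize("toplevel")
resize.begin(90, 100)
resize.on_sync_acknowledged()
resize.on_buffer_committed()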

The frame synchronization process looks more involved with Xwayland, but it is still manageable.

_NET_WM_SYNC_REQUEST support in applications

Most applications that use GTK and Qt support _NET_WM_SYNC_REQUEST, but there are applications that don’t participate in the frame synchronization protocol. If you use one of those apps, you will observe visual glitches during interactive resize.

Closing words

Frame synchronization is a difficult problem, and requires some very intricate code both on the compositor and the client side. But with the changes that we’ve made, I’m proud to say that KWin is one of the few compositors that properly handles frame synchronization for X11 windows on Wayland!

Categories: FLOSS Project Planets

Sven Hoexter: GKE version 1.31.1-gke.1678000+ is a baddy

Planet Debian - Mon, 2024-10-28 12:43

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

Updating from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. when kustomize is involved, with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
      name: dummy
  target:
    kind: NetworkPolicy

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic, sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

Categories: FLOSS Project Planets

The Open Source Initiative Announces the Release of the Industry’s First Open Source AI Definition

Open Source Initiative - Mon, 2024-10-28 12:02

RALEIGH, N.C., Oct. 28, 2024 — ALL THINGS OPEN 2024 — After a year-long, global, community design process, the Open Source AI Definition (OSAID) v1.0 is available for public use.

The release of version 1.0 was announced today at All Things Open 2024, an industry conference focused on common issues of interest to the worldwide Open Source community. The OSAID offers a standard by which community-led, open and public evaluations will be conducted to validate whether or not an AI system can be deemed Open Source AI. This first stable version of the OSAID is the result of multiple years of research and collaboration, an international roadshow of workshops, and a year-long co-design process led by the Open Source Initiative (OSI), globally recognized by individuals, companies and public institutions as the authority that defines Open Source.

“The co-design process that led to version 1.0 of the Open Source AI Definition was well-developed, thorough, inclusive and fair,” said Carlo Piana, OSI board chair. “It adhered to the principles laid out by the board, and the OSI leadership and staff followed our directives faithfully. The board is confident that the process has resulted in a definition that meets the standards of Open Source as defined in the Open Source Definition and the Four Essential Freedoms, and we’re energized about how this definition positions OSI to facilitate meaningful and practical Open Source guidance for the entire industry.”

“The new definition requires Open Source models to provide enough information about their training data so that a ‘skilled person can recreate a substantially equivalent system using the same or similar data,’ which goes further than what many proprietary or ostensibly Open Source models do today,” said Ayah Bdeir, who leads AI strategy at Mozilla. “This is the starting point to addressing the complexities of how AI training data should be treated, acknowledging the challenges of sharing full datasets while working to make open datasets a more commonplace part of the AI ecosystem. This view of AI training data in Open Source AI may not be a perfect place to be, but insisting on an ideologically pristine kind of gold standard that will not actually be met by any model builder could end up backfiring.”

“We welcome OSI’s stewardship of the complex process of defining Open Source AI,” said Liv Marte Nordhaug, CEO of the Digital Public Goods Alliance (DPGA) secretariat. “The Digital Public Goods Alliance secretariat will build on this foundational work as we update the DPG Standard as it relates to AI as a category of DPGs.”

“Transparency is at the core of EleutherAI’s non-profit mission. The Open Source AI Definition is a necessary step towards promoting the benefits of Open Source principles in the field of AI,” said Stella Biderman, executive director at the EleutherAI Institute. “We believe that this definition supports the needs of independent machine learning researchers and promotes greater transparency among the largest AI developers.”

“Arriving at today’s OSAID version 1.0 was a difficult journey, filled with new challenges for the OSI community,” said OSI Executive Director, Stefano Maffulli. “Despite this delicate process, filled with differing opinions and uncharted technical frontiers—and the occasional heated exchange—the results are aligned with the expectations set out at the start of this two-year process. This is a starting point for a continued effort to engage with the communities to improve the definition over time as we develop with the broader Open Source community the knowledge to read and apply OSAID v.1.0.”

The text of the OSAID v.1.0 as well as a partial list of the many global stakeholders who endorse the definition can be found here: https://opensource.org/ai

About the Open Source Initiative
Founded in 1998, the Open Source Initiative (OSI) is a non-profit corporation with global scope formed to educate about and advocate for the benefits of Open Source and to build bridges among different constituencies in the Open Source community. It is the steward of the Open Source Definition and the Open Source AI Definition, setting the foundation for the global Open Source ecosystem. Join and support the OSI mission today at: https://opensource.org/join.

Categories: FLOSS Research

Trey Hunner: Adding keyboard shortcuts to the Python REPL

Planet Python - Mon, 2024-10-28 10:15

I talked about the new Python 3.13 REPL a few months ago and after 3.13 was released. I think it’s awesome.

I’d like to share a secret feature within the Python 3.13 REPL which I’ve been finding useful recently: adding custom keyboard shortcuts.

This feature involves a PYTHONSTARTUP file, use of an unsupported Python module, and dynamically evaluating code.

In short, we may be getting ourselves into trouble. But the result is very neat!

Thanks to Łukasz Langa for inspiring this post via his excellent EuroPython keynote talk.

The goal: keyboard shortcuts in the REPL

First, I’d like to explain the end result.

Let’s say I’m in the Python REPL on my machine and I’ve typed numbers =:

>>> numbers =

I can now hit Ctrl-N to enter a list of numbers I often use while teaching (Lucas numbers):

numbers = [2, 1, 3, 4, 7, 11, 18, 29]

That saved me some typing!

Getting a prototype working

First, let’s try out an example command.

Copy-paste this into your Python 3.13 REPL:

from _pyrepl.simple_interact import _get_reader
from _pyrepl.commands import Command

class Lucas(Command):

    def do(self):
        self.reader.insert("[2, 1, 3, 4, 7, 11, 18, 29]")

reader = _get_reader()
reader.commands["lucas"] = Lucas
reader.bind(r"\C-n", "lucas")

Now hit Ctrl-N.

If all worked as planned, you should see that list of numbers entered into the REPL.

Cool! Now let’s generalize this trick and make Python run our code whenever it starts.

But first… a disclaimer.

Here be dragons 🐉

Notice that _ prefix in the _pyrepl module that we’re importing from? That means this module is officially unsupported.

The _pyrepl module is an implementation detail and its implementation may change at any time in future Python versions.

In other words: _pyrepl is designed to be used by Python’s standard library modules and not anyone else. That means that we should assume this code will break in a future Python version.

Will that stop us from playing with this module for the fun of it?

It won’t.

Creating a PYTHONSTARTUP file

So we’ve made one custom key combination for ourselves. How can we setup this command automatically whenever the Python REPL starts?

We need a PYTHONSTARTUP file.

When Python launches, if it sees a PYTHONSTARTUP environment variable it will treat that environment variable as a Python file to run on startup.

I’ve made a /home/trey/.python_startup.py file and I’ve set this environment variable in my shell’s configuration file (~/.zshrc):

export PYTHONSTARTUP=$HOME/.python_startup.py

To start, we could put our single custom command in this file:

try:
    from _pyrepl.simple_interact import _get_reader
    from _pyrepl.commands import Command
except ImportError:
    pass  # Not in the new pyrepl OR _pyrepl implementation changed
else:
    class Lucas(Command):
        def do(self):
            self.reader.insert("[2, 1, 3, 4, 7, 11, 18, 29]")

    reader = _get_reader()
    reader.commands["lucas"] = Lucas
    reader.bind(r"\C-n", "lucas")

Note that I’ve stuck our code in a try-except block. Our code only runs if those _pyrepl imports succeed.

Note that this might still raise an exception when Python starts if the reader object’s commands attribute or bind method change in a way that breaks our code.

Personally, I’d like to see those breaking changes print out a traceback the next time I upgrade Python. So I’m going to leave those last few lines without their own catch-all exception handler.

Generalizing the code

Here’s a PYTHONSTARTUP file with a more generalized solution:

try:
    from _pyrepl.simple_interact import _get_reader
    from _pyrepl.commands import Command
except ImportError:
    pass
else:
    # Hack the new Python 3.13 REPL!
    cmds = {
        r"\C-n": "[2, 1, 3, 4, 7, 11, 18, 29]",
        r"\C-f": '["apples", "oranges", "bananas", "strawberries", "pears"]',
    }
    from textwrap import dedent
    reader = _get_reader()
    for n, (key, text) in enumerate(cmds.items(), start=1):
        name = f"CustomCommand{n}"
        exec(dedent(f"""
            class _cmds:
                class {name}(Command):
                    def do(self):
                        self.reader.insert({text!r})
            reader.commands[{name!r}] = _cmds.{name}
            reader.bind({key!r}, {name!r})
        """))
    # Clean up all the new variables
    del _get_reader, Command, dedent, reader, cmds, text, key, name, _cmds, n

This version uses a dictionary to map keyboard shortcuts to the text they should insert.

Note that we’re repeatedly building up a string of Command subclasses for each shortcut, using exec to execute the code for that custom Command subclass, and then binding the keyboard shortcut to that new command class.

At the end we then delete all the variables we’ve made so our REPL will start the clean global environment we normally expect it to have:

Python 3.13.0 (main, Oct  8 2024, 10:37:56) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> dir()
['__annotations__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__']

Is this messy?

Yes.

Is that a needless use of a dictionary that could have been a list of 2-item tuples instead?

Yes.

Does this work?

Yes.

Doing more interesting and risky stuff

Note that there are many keyboard shortcuts that may cause weird behaviors if you bind them.

For example, if you bind Ctrl-i, your binding may trigger every time you try to indent. And if you try to bind Ctrl-m, your binding may be ignored because this is equivalent to hitting the Enter key.

So be sure to test your REPL carefully after each new binding you try to invent.

If you want to do something more interesting, you could poke around in the _pyrepl package to see what existing code you can use/abuse.

For example, here’s a very hacky way of binding Ctrl-x followed by Ctrl-r to import subprocess, type in a subprocess.run line, and move your cursor between the empty string within the run call:

class _cmds:
    class Run(Command):
        def do(self):
            from _pyrepl.commands import backward_kill_word, left
            backward_kill_word(self.reader, self.event_name, self.event).do()
            self.reader.insert("import subprocess\n")
            code = 'subprocess.run("", shell=True)'
            self.reader.insert(code)
            for _ in range(len(code) - code.index('""') - 1):
                left(self.reader, self.event_name, self.event).do()
reader.commands["subprocess_run"] = _cmds.Run
reader.bind(r"\C-x\C-r", "subprocess_run")

What keyboard shortcuts are available?

As you play with customizing keyboard shortcuts, you’ll likely notice that many key combinations result in strange and undesirable behavior when overridden.

For example, overriding Ctrl-J will also override the Enter key… at least it does in my terminal.

I’ll list the key combinations that seem unproblematic on my setup with Gnome Terminal in Ubuntu Linux.

Here are Control key shortcuts that seem to be completely unused in the Python REPL:

  • Ctrl-N
  • Ctrl-O
  • Ctrl-P
  • Ctrl-Q
  • Ctrl-S
  • Ctrl-V

Note that Ctrl-H is often an alternative to the Backspace key, so think twice before overriding it

Here are Alt/Meta key shortcuts that appear unused on my machine:

  • Alt-A
  • Alt-E
  • Alt-G
  • Alt-H
  • Alt-I
  • Alt-J
  • Alt-K
  • Alt-M
  • Alt-N
  • Alt-O
  • Alt-P
  • Alt-Q
  • Alt-S
  • Alt-V
  • Alt-W
  • Alt-X
  • Alt-Z

You can add an Alt shortcut by using \M (for “meta”). So r"\M-a" would capture Alt-A just as r"\C-a" would capture Ctrl-A.
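
For instance, assuming the reader object and the "lucas" command from the prototype earlier in this post are still defined, binding the Lucas numbers to Alt-J (listed as unused above) is a one-liner:

reader.bind(r"\M-j", "lucas")  # Alt-J now inserts the Lucas numbers list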

Here are keyboard shortcuts that can be customized but you might want to consider whether the current default behavior is worth losing:

  • Alt-B: backward word (same as Ctrl-Left)
  • Alt-C: capitalize word (does nothing on my machine…)
  • Alt-D: kill word (delete to end of word)
  • Alt-F: forward word (same as Ctrl-Right)
  • Alt-L: downcase word (does nothing on my machine…)
  • Alt-U: upcase word (does nothing on my machine…)
  • Alt-Y: yank pop
  • Ctrl-A: beginning of line (like the Home key)
  • Ctrl-B: left (like the Left key)
  • Ctrl-E: end of line (like the End key)
  • Ctrl-F: right (like the Right key)
  • Ctrl-G: cancel
  • Ctrl-H: backspace (same as the Backspace key)
  • Ctrl-K: kill line (delete to end of line)
  • Ctrl-T: transpose characters
  • Ctrl-U: line discard (delete to beginning of line)
  • Ctrl-W: word discard (delete to beginning of word)
  • Ctrl-Y: yank
  • Alt-R: restore history (within history mode)
What fun have you found in _pyrepl?

Find something fun while playing with the _pyrepl package’s inner-workings?

I’d love to hear about it! Comment below to share what you found.

Categories: FLOSS Project Planets

Real Python: Beautiful Soup: Build a Web Scraper With Python

Planet Python - Mon, 2024-10-28 10:00

Web scraping is the automated process of extracting data from the internet. The Python libraries Requests and Beautiful Soup are powerful tools for the job. To effectively harvest the vast amount of data available online for your research, projects, or personal interests, you’ll need to become skilled at web scraping.
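
As a quick taste of how the two libraries fit together, here is a minimal sketch; the URL is a placeholder, and it assumes the requests and beautifulsoup4 packages are installed:

import requests
from bs4 import BeautifulSoup

# Fetch a page (placeholder URL) and print its second-level headings.
response = requests.get("https://example.com/jobs")
response.raise_for_status()  # stop early on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")
for heading in soup.find_all("h2"):
    print(heading.get_text(strip=True))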

In this tutorial, you’ll learn how to:

  • Inspect the HTML structure of your target site with your browser’s developer tools
  • Decipher data encoded in URLs
  • Use Requests and Beautiful Soup for scraping and parsing data from the internet
  • Step through a web scraping pipeline from start to finish
  • Build a script that fetches job offers from websites and displays relevant information in your console

If you like learning with hands-on examples and have a basic understanding of Python and HTML, then this tutorial is for you! Working through this project will give you the knowledge and tools you need to scrape any static website out there on the World Wide Web. You can download the project source code by clicking on the link below:

Get Your Code: Click here to download the free sample code that you’ll use to learn about web scraping in Python.

Take the Quiz: Test your knowledge with our interactive “Beautiful Soup: Build a Web Scraper With Python” quiz. You’ll receive a score upon completion to help you track your learning progress.

What Is Web Scraping?

Web scraping is the process of gathering information from the internet. Even copying and pasting the lyrics of your favorite song can be considered a form of web scraping! However, the term “web scraping” usually refers to a process that involves automation. While some websites don’t like it when automatic scrapers gather their data, which can lead to legal issues, others don’t mind it.

If you’re scraping a page respectfully for educational purposes, then you’re unlikely to have any problems. Still, it’s a good idea to do some research on your own to make sure you’re not violating any Terms of Service before you start a large-scale web scraping project.

Reasons for Automated Web Scraping

Say that you like to surf—both in the ocean and online—and you’re looking for employment. It’s clear that you’re not interested in just any job. With a surfer’s mindset, you’re waiting for the perfect opportunity to roll your way!

You know about a job site that offers precisely the kinds of jobs you want. Unfortunately, a new position only pops up once in a blue moon, and the site doesn’t provide an email notification service. You consider checking up on it every day, but that doesn’t sound like the most fun and productive way to spend your time. You’d rather be outside surfing real-life waves!

Thankfully, Python offers a way to apply your surfer’s mindset. Instead of having to check the job site every day, you can use Python to help automate the repetitive parts of your job search. With automated web scraping, you can write the code once, and it’ll get the information that you need many times and from many pages.

Note: In contrast, when you try to get information manually, you might spend a lot of time clicking, scrolling, and searching, especially if you need large amounts of data from websites that are regularly updated with new content. Manual web scraping can take a lot of time and be highly repetitive and error-prone.

There’s so much information on the internet, with new information constantly being added. You’ll probably be interested in some of that data, and much of it is out there for the taking. Whether you’re actually on the job hunt or just want to automatically download all the lyrics of your favorite artist, automated web scraping can help you accomplish your goals.
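To make the idea concrete, here’s a minimal sketch of such a script using Requests and Beautiful Soup. The URL and the CSS class are hypothetical placeholders, not part of any real job site:

import requests
from bs4 import BeautifulSoup

# Hypothetical job board -- substitute the site you actually care about.
URL = "https://example.com/jobs"

# Fetch the page once instead of checking it by hand every day.
response = requests.get(URL, timeout=10)
response.raise_for_status()

# Parse the HTML and pull out just the pieces you're interested in.
soup = BeautifulSoup(response.text, "html.parser")

# Assumes each posting renders its title in an element with class "job-title".
for title in soup.find_all(class_="job-title"):
    print(title.get_text(strip=True))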

Challenges of Web Scraping

The internet has grown organically out of many sources. It combines many different technologies, styles, and personalities, and it continues to grow every day. In other words, the internet is a hot mess! Because of this, you’ll run into some challenges when scraping the web:

  • Variety: Every website is different. While you’ll encounter general structures that repeat themselves, each website is unique and will need personal treatment if you want to extract the relevant information.

  • Durability: Websites constantly change. Say you’ve built a shiny new web scraper that automatically cherry-picks what you want from your resource of interest. The first time you run your script, it works flawlessly. But when you run the same script a while later, you run into a discouraging and lengthy stack of tracebacks!

Unstable scripts are a realistic scenario because many websites are in active development. If a site’s structure changes, then your scraper might not be able to navigate the sitemap correctly or find the relevant information. The good news is that changes to websites are often small and incremental, so you’ll likely be able to update your scraper with minimal adjustments.

Still, keep in mind that the internet is dynamic and keeps on changing. Therefore, the scrapers you build will probably require maintenance. You can set up continuous integration to run scraping tests periodically to ensure that your main script doesn’t break without your knowledge.
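As a sketch of that last point, a small scheduled test like the one below can flag breakage early. The URL and the selector are assumptions you’d replace with your own scraper’s target:

import requests
from bs4 import BeautifulSoup

def test_job_listings_still_present():
    # Hypothetical page that the scraper depends on.
    response = requests.get("https://example.com/jobs", timeout=10)
    assert response.status_code == 200

    # If the site's structure changes, this selector stops matching and the
    # test fails -- alerting you before the main scraper breaks silently.
    soup = BeautifulSoup(response.text, "html.parser")
    assert soup.select("div.job-card"), "expected at least one job card"

Run under pytest on a schedule in your CI system; a failure here tells you it’s time to update the scraper.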

An Alternative to Web Scraping: APIs

Some website providers offer application programming interfaces (APIs) that allow you to access their data in a predefined manner. With APIs, you can avoid parsing HTML, which is primarily a way to visually present content to users. Instead, you can access the data directly in formats like JSON and XML.

When you use an API, the data collection process is generally more stable than it is through web scraping. That’s because developers create APIs to be consumed by programs rather than by human eyes.

The front-end presentation of a site might change often, but a change in the website’s design doesn’t affect its API structure. The structure of an API is usually more permanent, which means it’s a more reliable source of the site’s data.

However, APIs can change as well. The challenges of both variety and durability apply to APIs just as they do to websites. Additionally, it’s much harder to inspect the structure of an API by yourself if the provided documentation lacks quality.
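To illustrate the contrast, consuming a JSON API typically takes only a few lines and involves no HTML parsing at all. The endpoint and field names below are made-up placeholders:

import requests

# Hypothetical JSON API -- real providers document their own endpoints.
response = requests.get("https://api.example.com/v1/jobs", timeout=10)
response.raise_for_status()

# The payload is already structured data, so there's nothing to scrape.
for job in response.json():
    print(job["title"], "-", job["location"])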

Read the full article at https://realpython.com/beautiful-soup-web-scraper-python/ »


Categories: FLOSS Project Planets

Drupal life hack's: Secrets of Secure Development in Drupal: Key Functions

Planet Drupal - Mon, 2024-10-28 09:41
Categories: FLOSS Project Planets

Real Python: Quiz: Beautiful Soup: Build a Web Scraper With Python

Planet Python - Mon, 2024-10-28 08:00

In this quiz, you’ll test your understanding of web scraping with Python, Requests, and Beautiful Soup.

By working through this quiz, you’ll revisit how to inspect the HTML structure of your target site with your browser’s developer tools, decipher data encoded in URLs, use Requests and Beautiful Soup for scraping and parsing data from the Web, and gain an understanding of what a web scraping pipeline looks like.


Categories: FLOSS Project Planets

Golems GABB: Using JavaScript Frameworks - React, Vue, Angular in Drupal

Planet Drupal - Mon, 2024-10-28 05:44

The integration of JavaScript frameworks like React, Vue, and Angular with Drupal has sparked a wave of creativity and innovation that goes beyond simply building websites. This blog explores the benefits and methods of integrating these frameworks with Drupal, demonstrating how this fusion enhances front-end development and user engagement.
Traditional static websites and strict CMS constraints are becoming a thing of the past. Nowadays, developers embrace the adaptability and engagement offered by JavaScript frameworks to design user interfaces within the Drupal environment.

Drupal's Frontend Landscape: General Overview

To understand how JavaScript frameworks such as React, Vue, and Angular work with Drupal, you first need to know Drupal's frontend environment and the difficulties developers face when building complex user interfaces in this powerful content management system.

Categories: FLOSS Project Planets

Thomas Lange: 30,000 FAIme jobs created in 7 years

Planet Debian - Mon, 2024-10-28 05:32

The number of FAIme jobs has reached 30,000. Yeah!
At the end of this November, the FAIme web service for building customized ISOs turns 7 years old. It had reached 10,000 jobs in March 2021 and 20,000 jobs in June 2023 - a nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs
  • 3% cloud image
  • 11% live ISO
  • 86% install ISO

Distribution
  • 2% bullseye
  • 8% trixie
  • 12% ubuntu 24.04
  • 78% bookworm

Misc
  • 18% used a custom postinst script
  • 11% provided their SSH public key for passwordless root login
  • 50% of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 minutes to finish and the resulting ISO was 18 GB in size.
Execution Times

Cloud and live ISOs need more time to create because the FAIme server has to unpack and install all packages; for the install ISO, the packages are only downloaded. The amount of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. The times given are calculated from the jobs of the past two weeks.

Job type            Avg     Max
install no desktop  1 min   2 min
install GNOME       2 min   5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type         Avg     Max
live no desktop  4 min   6 min
live GNOME       8 min   11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year, the FAI project will be 25 years old. If you have a success story about your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected, and Ubuntu server or Ubuntu desktop installation ISOs can also be customized. Multiple options are available, like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script, and more.

Categories: FLOSS Project Planets

Python Bytes: #407 Back to the future, destination 3.14

Planet Python - Mon, 2024-10-28 04:00
Topics covered in this episode:

  • Python 3.14.0 alpha 1 is now available
  • uv supports dependency groups
  • dive: A tool for exploring each layer in a docker image
  • pytest-metadata
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=70OO7BMV1KE

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training (https://training.talkpython.fm/)
  • The Complete pytest Course & Hello, pytest! (https://courses.pythontest.com/)
  • Patreon Supporters (https://www.patreon.com/pythonbytes)

Connect with the hosts:

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: Python 3.14.0 alpha 1 is now available (https://pythoninsider.blogspot.com/2024/10/python-3140-alpha-1-is-now-available.html)

  • First of seven planned alpha releases.
  • Many new features for Python 3.14 are still being planned and written. Among the major new features and changes so far:
      • PEP 649: deferred evaluation of annotations
      • Improved error messages

Brian #2: uv supports dependency groups (https://github.com/astral-sh/uv/pull/8272)

  • We covered dependency groups in episode 406.
  • As of 0.4.27, uv supports dependency groups.
  • The docs show how to add dependencies with uv add --group.
      • Also: “The --dev, --only-dev, and --no-dev flags are equivalent to --group dev, --only-group dev, and --no-group dev respectively.”
  • To install a group, uv pip install --group doesn’t work yet.
      • It’s waiting for PyPA to decide on an interface for pip, and uv pip will use that interface.
  • But sync works:

    $ uv init             # create a pyproject.toml
    $ uv add --group foo pytest
    $ uv venv             # create venv
    $ uv sync --group foo # will install all dependencies, including group "foo"

Michael #3: dive: A tool for exploring each layer in a docker image (https://github.com/wagoodman/dive)

  • via Mike Fiedler
  • Features:
      • Show Docker image contents broken down by layer
      • Indicate what’s changed in each layer
      • Estimate “image efficiency”
      • Quick build/analysis cycles
      • CI integration

Brian #4: pytest-metadata (https://pypi.org/project/pytest-metadata/)

  • An incredibly useful plugin for adding, you guessed it, metadata to your pytest results.
  • Required for pytest-html (https://pypi.org/project/pytest-html/) but also useful on its own.
  • Adds metadata to:
      • text output with --verbose
      • XML output when using --junit-xml, handy for CI systems that support junit.xml
  • Other plugins depend on this and report in other ways, such as pytest-html.
  • By default, it already grabs:
      • Python version
      • Platform info
      • List of installed packages
      • List of installed pytest plugins
  • You can add your own metadata.
  • You can access all metadata (and add to it) from tests, fixtures, and hook functions via a metadata fixture.
  • This is in the Top pytest Plugins list (https://pythontest.com/top-pytest-plugins/), currently #5.

Extras

Brian:

  • I’ve started filtering deprecated plugins from the pytest plugin list.
  • I’m also going to start reviewing the list and pulling out interesting plugins as the topic of the next season of Test & Code (https://testandcode.com).

Michael:

  • Pillow 11 is out (https://mastodon.social/@hugovk/113312137194438039)
  • pip install deutschland (https://hachyderm.io/@graham_knapp/113351051856672146)
  • Talk Python has a dedicated blog (https://talkpython.fm/blog/), please subscribe!

Joke: Dog names
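As a quick taste of the pytest-metadata item above, here is a minimal sketch of its metadata fixture; the custom “build” key is just an illustration:

# Assumes pytest and pytest-metadata are installed.
def test_metadata_is_available(metadata):
    # By default, pytest-metadata records the Python version, platform,
    # installed packages and installed plugins.
    assert "Python" in metadata

    # You can also attach your own values, e.g. a build number from CI.
    metadata["build"] = "1234"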
Categories: FLOSS Project Planets

Zato Blog: Salesforce API integrations and connected apps

Planet Python - Mon, 2024-10-28 03:43
2024-10-28, by Dariusz Suchojad

Overview

This instalment in a series of articles about API integrations with Salesforce covers connected apps - how to create them and how to obtain their credentials needed to exchange REST messages with Salesforce.

In Salesforce's terminology, a connected app is, essentially, an API client. It has credentials, a set of permissions, and it works on behalf of a user in an automated manner.

In particular, the kind of connected app that I am going to create below is one that can be used in backend, server-side integrations that operate without any direct input from end users or administrators, i.e. the app is created once, its permissions and credentials are set once, and then it is able to work uninterrupted in the background, on the server side.

Server-side systems are quite unlike other kinds of apps, such as mobile ones, which assume there is a human operator involved - those have their own work characteristics, related yet different, and I am not going to cover them here.

Note that permission types and their scopes are a separate, broad subject and they will be described in a separate how-to article.

Finally, I assume that you are either an administrator in a Salesforce organization or that you are preparing information for another person with similar grants in Salesforce.

Conceptually, there is nothing particularly unusual about Salesforce connected apps; it is just a mini-world of its own jargon that, at the end of the day, simply enables you to invoke the APIs that Salesforce is built on. It is just that knowing where to click, what to choose and how to navigate the user interface can be a daunting challenge, one that this article hopes to make easier to overcome.

The steps

For an automated, server-side connected app to make use of Salesforce APIs, the requirements are:

  • Having access to username/password credentials
  • Creating a connected app
  • Granting permissions to the app (not covered in this article)
  • Obtaining a consumer key and consumer secret for the app

You will note that there are four credentials in total:

  • Username
  • Password
  • Consumer key
  • Consumer secret

Also, depending on which chapter of the Salesforce documentation you are reading, you will note that the consumer key is also known as "client_id", whereas another name for the consumer secret is "client_secret". The two pairs of names mean the same thing.

Access to username/password credentials

For starters, you need to have an account in Salesforce, a combination of username + password that you can log in with and on whose behalf the connected app will be created:

Creating a connected app

Once you are logged in, go to Setup in the top right-hand corner:

In the search box, look up "app manager":

Next, click the "New Connected App" button to the right:

Fill out the basic details such as "Connected App Name" and make sure that you select "Enable OAuth Settings". Then, given that in this document we are not dealing with the subject of permissions at all, grant full access to the connected app and finally click "Save" at the bottom of the page.

Obtaining a consumer key and consumer secret

We have a connected app but we still do not know what its consumer key and secret are. To reveal them, go to the "App Manager" once more, either via the search box or using the menu on the left-hand side.

Find your app in the list and click "View" in the list of actions. Observe that it is "View", not "Edit" or "Manage", where you can check what the credentials are:

The consumer key and secret can now be revealed in the "API (Enable OAuth Settings)" section:

This concludes the process - you have a connected app and all the credentials needed now.

Testing

Seeing as this document is part of a series of how-tos in the context of Zato, if you would like to integrate with Salesforce in Python, at this point you will be able to follow the steps in another article where everything is detailed separately.

Just as a quick teaser, it would look akin to the below.

...
# Salesforce REST API endpoint to invoke
path = '/sobjects/Campaign/'

# Build the request to Salesforce based on what we received
request = {
    'Name': input.name,
    'Segment__c': input.segment,
}

# Create a reference to our connection definition ..
salesforce = self.cloud.salesforce['My Salesforce Connection']

# .. obtain a client to Salesforce ..
with salesforce.conn.client() as client: # type: SalesforceClient

    # .. create the campaign now.
    response = client.post(path, request)
...

On a much lower level, however, if you would just like to quickly test whether you configured the connected app correctly, you can invoke, from the command line, a Salesforce REST endpoint that returns an OAuth token, as below.

Note that, as I mentioned previously, client_id is the same as the consumer key and client_secret is the same as the consumer secret.

curl https://example.my.salesforce.com/services/oauth2/token \
  -H "X-PrettyPrint: 1" \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=password' \
  --data-urlencode 'username=hello@example.com' \
  --data-urlencode 'password=my.password' \
  --data-urlencode 'client_id=my.consumer.key' \
  --data-urlencode 'client_secret=my.consumer.secret'

The result will be, for instance:

{ "access_token" : "008e0000000PTzLPb!4Vzm91PeIWJo.IbPzoEZf2ygEM.6cavCt0YwAGSM", "instance_url" : "https://example.my.salesforce.com", "id" : "https://login.salesforce.com/id/008e0000000PTzLPb/0081fSUkuxPDrir000j1", "token_type" : "Bearer", "issued_at" : "1649064143961", "signature" : "dwb6rwNIzl76kZq8lQswsTyjW2uwvTnh=" }

Above, we have an OAuth bearer token on output - this can be used in subsequent, business REST calls to Salesforce but how to do it exactly in practice is left for another article.
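While the details are indeed left for another article, a rough sketch of such a follow-up call, in Python with requests, might look like the below. The API version, object name and field are illustrative assumptions rather than anything prescribed by this article:

import requests

# Values taken from the OAuth response above.
instance_url = "https://example.my.salesforce.com"
access_token = "008e0000000PTzLPb!4Vzm91PeIWJo.IbPzoEZf2ygEM.6cavCt0YwAGSM"

# A hypothetical business call -- create a Campaign object.
response = requests.post(
    instance_url + "/services/data/v56.0/sobjects/Campaign/",
    headers={"Authorization": "Bearer " + access_token},
    json={"Name": "My Campaign"},
)
print(response.json())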

Next steps:

➤ Read about how to use Python to build and integrate enterprise APIs that your tests will cover
➤ Python API integration tutorial
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?

More blog posts
Categories: FLOSS Project Planets
