FLOSS Project Planets
The Drop Times: Nneka Hector Reflecting on DrupalGovCon 2023
Matt Layman: Design and Stripe - Building SaaS with Python and Django #180
FSF Blogs: The board process, the GNU Cauldron, SaaSS, and more
The board process, the GNU Cauldron, SaaSS, and more
Python Insider: Python 3.13.0 alpha 3 is now available.
We silently skipped releasing in December (it was too close to the holidays, a lot of people were away) so by date you may have been expecting alpha 4, but instead it’s alpha 3:
https://www.python.org/downloads/release/python-3130a3/
This is an early developer preview of Python 3.13.
Major new features of the 3.13 series, compared to 3.12
Python 3.13 is still in development. This release, 3.13.0a3, is the third of six planned alpha releases.
Alpha releases are intended to make it easier to test the current state of new features and bug fixes and to test the release process.
During the alpha phase, features may be added up until the start of the beta phase (2024-05-07) and, if necessary, may be modified or deleted up until the release candidate phase (2024-07-30). Please keep in mind that this is a preview release and its use is not recommended for production environments.
Many new features for Python 3.13 are still being planned and written. Work continues apace both on removing the Global Interpreter Lock and on improving Python performance. The most notable changes so far:
- In the interactive interpreter, exception tracebacks are now colorized by default.
- Docstrings now have their leading indentation stripped, reducing memory use and the size of .pyc files. (Most tools handling docstrings already strip leading indentation.)
- PEP 594 (Removing dead batteries from the standard library) scheduled removals of many deprecated modules: aifc, audioop, chunk, cgi, cgitb, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu, xdrlib, lib2to3.
- Many other removals of deprecated classes, functions and methods in various standard library modules.
- New deprecations, most of which are scheduled for removal from Python 3.15 or 3.16.
- C API removals and deprecations. (Some removals present in alpha 1 have been reverted in alpha 2, as the removals were deemed too disruptive at this time.)
(Hey, fellow core developer, if a feature you find important is missing from this list, let Thomas know.)
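As an illustration of what the PEP 594 removals mean in practice, the sketch below (module list abridged to a few of the names above) probes which of the dead-battery modules still resolve on the running interpreter, without importing them:

```python
import importlib.util
import sys

# A subset of the PEP 594 "dead battery" modules removed in Python 3.13.
REMOVED_IN_3_13 = ["aifc", "audioop", "cgi", "telnetlib", "uu", "xdrlib"]

def missing_modules(names):
    """Return the names that cannot be found on this interpreter."""
    return [n for n in names if importlib.util.find_spec(n) is None]

gone = missing_modules(REMOVED_IN_3_13)
if sys.version_info >= (3, 13):
    print("Removed on this interpreter:", gone)
else:
    print("Still available pre-3.13:", sorted(set(REMOVED_IN_3_13) - set(gone)))
```

Code that still depends on one of these modules can use the same `find_spec` probe to fall back gracefully instead of crashing at import time on 3.13.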
The next pre-release of Python 3.13 will be 3.13.0a4, currently scheduled for 2024-02-13.
More resources
- Online Documentation
- PEP 719 , 3.13 Release Schedule
- Report bugs at Issues · python/cpython · GitHub.
- Help fund Python and its community.
Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.
Regards from snowy Amsterdam,
Your release team,
Thomas Wouters
Ned Deily
Steve Dower
Łukasz Langa
EuroPython: EuroPython is going to Prague this year 8-14th July! 🇨🇿
Hello fellow Pythonista! 🐍
Put the date in your calendar!
We’re delighted to announce that EuroPython 2024 will be held again in Prague between the 8th and 14th of July, 2024. To stay up to date with the latest news please visit our website and we encourage you to sign up to our monthly community newsletter.
A banner depicting the logo and the dates for EuroPython 2024.
Sitting on the Vltava river, the city boasts numerous cultural, culinary, architectural, artistic, civic and tourist attractions. The beautiful historic centre is a UNESCO World Heritage site. So, in addition to the conference, there’s a LOT to see, do and enjoy. Prague has an extensive and modern public transport system, so travelling around won’t be a problem, and getting there is easy via well-serviced train, air and road infrastructure. With an average July high of 25°C, remember to pack your shorts and sandals with your laptop, along with a spirit of adventure to explore one of the most visited and vibrant cities in Europe.
As with previous years, Monday and Tuesday (8th and 9th) will be for tutorials and workshops; the main conference talks will take place on Wednesday, Thursday and Friday (10th, 11th and 12th) with Saturday and Sunday (13th and 14th) for community sprints.
Our sponsorship prospectus is coming soon, but if you already want to sponsor one of Europe’s biggest, friendliest and longest-running community-organised software development conferences, please reach out to us at sponsoring@europython.eu and we’d be delighted to help.
Because EuroPython is a community organised conference, our volunteers are at the heart of everything we do. Without them, there simply wouldn’t be a conference. We’ll soon start recruiting volunteers, who are organised into teams responsible for different aspects of the conference. To register your interest please email volunteers@europython.eu.
Can’t wait for EuroPython? Come check out photos from past years on our website: https://ep2024.europython.eu/
We look forward to seeing you in Prague,
VB on behalf of the EuroPython Board.
Security advisories: Drupal core - Moderately critical - Denial of Service - SA-CORE-2024-001
The Comment module allows users to reply to comments. In certain cases, an attacker could make comment reply requests that would trigger a denial of service (DoS).
Sites that do not use the Comment module are not affected.
Solution:
Install the latest version:
- If you are using Drupal 10.2, update to Drupal 10.2.2.
- If you are using Drupal 10.1, update to Drupal 10.1.8.
All versions of Drupal 10 prior to 10.1 are end-of-life and do not receive security coverage. (Drupal 8 and Drupal 9 have both reached end-of-life.)
Drupal 7 is not affected.
Reported By:
Fixed By:
- Lee Rowlands of the Drupal Security Team
- Benji Fisher of the Drupal Security Team
- Juraj Nemec of the Drupal Security Team
- xjm of the Drupal Security Team
- Lauri Eskola, provisional member of the Drupal Security Team
FSF Events: Free Software Directory meeting on IRC: Friday, January 19, starting at 12:00 EST (17:00 UTC)
Drupal Association blog: Top Drupal accessibility modules for enhancing digital inclusivity
This post is brought to you from our partners at Skynet Technologies.
Making your Drupal website accessible is essential to uplifting its digital experience.
Digital technology has spread because it is, in principle, available to everyone. Unfortunately, the web is still full of inaccessible experiences that hinder users with disabilities. That is why Drupal has incorporated various accessibility features over time.
Alongside these built-in features, Drupal also has accessibility modules contributed by its active community. These modules improve a Drupal website’s accessibility without much coding effort.
Let’s look at the modules that enhance Drupal website accessibility.
Top Drupal web accessibility modules
#1 All in One Accessibility
Drupal All in One Accessibility is an AI-based accessibility module that helps make Drupal websites accessible to people with hearing or vision impairments, motor impairments, color blindness, dyslexia, cognitive and learning impairments, seizure and epileptic conditions, and ADHD. It manages UI and design-related alterations through an accessibility interface.
The Drupal All in One Accessibility module installs in just two minutes. The PRO version reduces the risk of time-consuming accessibility lawsuits.
This module improves accessibility compliance for the standards WCAG 2.0, WCAG 2.1, WCAG 2.2, ADA, Section 508, European EAA EN 301 549, Canada ACA, California Unruh, Israeli Standard 5568, Australian DDA, UK Equality Act, Ontario AODA, France RGAA, German BITV, Brazilian Inclusion law LBI 13.146/2015, Spain UNE 139803:2012, JIS X 8341, Italian Stanca Act, and Switzerland DDA.
It is a cornerstone of improving web accessibility through its ease of use for companies of all sizes. Top features of the module:
- Accessibility statement
- Accessibility interface for UI design fixes
- Dashboard Automatic accessibility score
- AI based Image Alternative Text remediation
- AI based Text to Speech Screen Reader
- Keyboard navigation adjustments
- Content, Color, Contrast, and Orientation Adjustments
- Supports 53 languages
- PDF / Document Remediation Add-On
- White Label Subscription
- Live site translation add-on
- Custom widget color, position, icon size, and type
- Dedicated email support
#2 Monsido
The Monsido tool helps optimize Drupal websites easily and swiftly. It ensures that the website is validated against the de facto international standard, WCAG 2.1, so that the website is accessible to everyone in every region.
Monsido scans your Drupal website to identify all persisting accessibility issues and gives you suggestions on addressing the issues to rectify them. It also finds SEO errors and helps you optimize every page of your website.
#3 Editoria11y Accessibility Checker
Editoria11y (editorial accessibility ally) is supported by Princeton University and focuses on content quality and accessibility.
The module checks content automatically, so authors do not need training to use it. Because it tests rendered content, it detects issues that appear only after Drupal assembles the page.
Editoria11y surfaces content issues by inserting alerts and tooltips that help authors fix problems without confronting them with complex code. It mainly supplements accessibility checking and does not replace page elements.
#4 Civic Accessibility Toolbar
The Civic Accessibility Toolbar provides a block of accessibility utilities that lets end users switch between theme versions with higher color contrast and adjust text font sizes.
The module lets you create a block with one or both of these utilities to make your Drupal website accessible to visually impaired users. It has been tested with the Garland, Bartik, Zen Starterkit, Stark, and Olivero themes.
It uses colourContrast and fontSize cookies to remember the user’s selection. These cookies store only functional details, not the user’s personal information.
#5 Accessibility Toolkit
The Accessibility Toolkit gives Drupal developers reusable tools to meet the needs of people with disabilities by making websites compatible with assistive technologies. It is tested on Drupal 7, 8, and 9. It works through aggressive CSS additions and remembers settings using Drupal’s built-in jQuery Cookie support.
It provides a block of settings that allow for:
- High contrast mode
- Dyslexic font support
- Text scaling
- Inverted colors mode
- Keyboard navigation (only for D8/D9)
#6 Fluidproject UI Options
The module is maintained by Ukrainian developers. It helps users modify a web page’s line height, font size and style, contrast, and link style. All changes are retained in cookies over a longer span. Fluidproject UI Options integrates its libraries into Drupal’s non-admin pages.
To use this module, you need Grunt and NPM installed to compile the Infusion library, and jQuery 1.7 is required.
However, the module cannot be internationalized through the Drupal interface; JSON files within the module folder perform this function instead. This Drupal accessibility module has been tested successfully with Drupal’s most popular themes. Note that some themes require additional CSS to adjust font sizes and line heights, and the contrast settings don’t work properly for website elements that use CSS gradients.
YOU MAY ALSO LIKE: PDF Document Accessibility Remediation
#7 High Contrast
High Contrast provides a quick way for users to switch between an active theme and its high-contrast version.
You only need to install it, press Tab on the keyboard, and click the ‘Toggle high contrast’ link to enter high-contrast mode; following the same steps returns you to the normal view.
#8 Style Switcher
This Drupal website accessibility module lets every site visitor choose the stylesheet they want to view the site content with. They only have to click its link to get the new look of the website.
Style Switcher reduces duplicated work, since developers don’t need to create separate themes for alternative stylesheets: a themer can provide a theme with alternate stylesheets, and a site builder can add alternate stylesheets in the admin section.
The module gathers and presents all the styles as a list of links in a block for site visitors. Thus, all visitors can easily choose their preferred styles. And the module uses cookies, so, if a user returns to the site, they get the same chosen style.
#9 Text Resize
The Text Resize accessibility module offers end users a block for changing the font size of text on Drupal websites. The block includes buttons to increase or decrease the text size, an aid for visually impaired users. Text Resize uses JavaScript with jQuery and jQuery Cookie to provide this accessibility.
#10 Automatic Alternative Text
The Automatic Alternative Text accessibility module uses the Microsoft Azure Cognitive Services API or Alttext.ai to generate alternative text for images whose alt text is missing.
The module’s image-processing algorithms can judge whether an image has relevant content, categorize the content of images, describe images in human-readable language, and estimate an image’s dominant and accent colors.
P.S. All of the above-mentioned modules have free and premium versions available; you can select the version best suited to you.
YOU MAY ALSO LIKE: Voluntary Product Assessment Template (VPAT)
Some more contributed modules to fine-tune your Drupal website’s accessibility
- CKEditor Abbreviation
- HTML Purifier
- Siteimprove
- htmLawed
- Block ARIA Landmark Roles
Read more for detailed information.
Wrapping up
Having an accessible website is crucial and the need of the hour. All in One Accessibility is a quick and comprehensive solution with AI-based features to take your website’s accessibility compliance to the next level. The cherry on top is its two-minute installation and 10-day free trial. Beyond that, dashboard add-ons and upgrades such as PDF/document accessibility remediation, a white-label subscription, and live site translation help increase digital accessibility further.
PyCon: PyCon US Hatchery is Back in 2024!
We are pleased to announce the return of the Hatchery program in PyCon US 2024.
What is the Hatchery program?
This program offers pathways for PyCon US attendees to introduce new tracks, activities, summits, demos, and more at PyCon US: activities that all share and fulfill the Python Software Foundation’s mission within the PyCon US schedule.
The program began as a trial led by Ee Durbin and Naomi Ceder in 2018, resulting in the creation of several new tracks that are now staples of PyCon US, for example: PyCon US Charlas, Mentored Sprints, and Maintainer’s Summit.
The Hatchery program was paused during the pandemic, and we are excited to restart and refresh this program for PyCon US 2024.
With the Hatchery program, we want to provide the opportunity for you, the Python community members, to participate actively and lead new activities and events at PyCon US. We want to provide a transparent process for this, and we also want to ensure that every attendee, whether they are new to the community or attending the conference for the tenth time, has an equal opportunity to propose ideas for PyCon US.
What belongs in a Hatchery program
PyCon US offers a wide range of events for the attendees to engage with the community during the conference. In addition to the talks, Charlas, keynotes, tutorials, posters, and lightning talks, at PyCon US we further support the Python community by hosting summits (e.g. The Python Language Summit, Education Summit), Sprints, and Open Spaces: one-hour meet ups in dedicated rooms throughout the conference. PyCon US also hosts other events like the PyLadies Luncheon, Members Lunch, and PyLadies Auction. We also offer community booths and Startup Row alongside the Sponsor Booths in the expo hall.
Despite all of the above, as conference organizers we still receive great, creative suggestions and ideas from the community for more things that they’d like to see at PyCon US. This is where the Hatchery program comes in.
If you have an idea for new and different kinds of events, activities, summits, or tracks at PyCon US, or for things that do not fit in any of the existing talks, Charlas, tutorials, and posters tracks, please propose it as a Hatchery program.
A few examples of topics that have been accepted for the program (not a complete list, creativity is expressly encouraged):
- Maintainers' Summit
- The Art of Python
- Foreign language talk track (see the PyCon US Charlas, the first track created in the hatchery, and now a part of PyCon US).
- Scientific data summit
- Mentored Sprints
See the full guidelines and criteria for proposals at https://us.pycon.org/2024/events/hatchery/
Hatchery Program Rebooted!
It’s been a few years since the Hatchery program last ran, so some folks might have forgotten about it, and newer community members might not yet know what it’s about. With that in mind, we decided this is a good time to introduce some changes to the Hatchery program.
One-off events are encouraged!
Previous versions of the Hatchery prioritized programs with the potential to become new staple, repeat programs at PyCon US, and one-off events were given lower priority. We recognize that this creates a high barrier to entry and could cause organizer burnout: people might feel “obligated” to continue a program year after year even when they no longer have the bandwidth for it. Therefore, we want to focus on the experimental aspect of the Hatchery. If your Hatchery program is accepted this year, you are free to continue it next year. But if you just want to experiment and host an activity only for this year, that’s okay too!
Rolling admission
You can propose your idea for the Hatchery starting today, until approximately four weeks before the conference.
We will be reviewing your ideas and proposals as they come in, and we will do our best to support and accommodate your request. We aim to give you a decision within two weeks, depending on the nature of your proposal (for example, if your proposal requires a more complicated room setup, we may need extra time to determine its feasibility).
Proposal submissions for the Hatchery Program are open until April 17, 2024, or until all the spaces have been filled.
Note that the conference venue is limited in terms of size and available rooms. We might not be able to provide you with a room you need for your program if you wait until the last minute to submit your idea. Therefore, we encourage you to submit your ideas as soon as you can!
How to submit your Hatchery proposal?
Please visit the PyCon US 2024 Hatchery page. Submissions are open through the PyCon US 2024 Hatchery CFP on Pretalx. Please note that Pretalx accounts used for the main PyCon US 2024 conference CFP will not be carried over; all submitters will need to create a new account for the PyCon US 2024 Hatchery CFP.
Thank you. If you have any questions about the Hatchery, please get in touch with us at pycon-hatchery@python.org. The PyCon US Hatchery Committee members are Elaine Wong, Mariatta Wijaya, and Naomi Ceder.
Nonprofit Drupal posts: January Drupal for Nonprofits Chat: Return of the Nonprofit Summit!
Join us TOMORROW, January 18 at 1pm ET / 10am PT, for our regularly scheduled call to chat about all things Drupal and nonprofits. (Convert to your local time zone.)
This month we'll be discussing the return of the Nonprofit Summit to DrupalCon Portland 2024! We're currently looking for breakout discussion leaders, and we'll be answering questions about what that involves, as well as throwing around ideas for potential topics.
And we'll of course also have time to discuss anything else that's on our minds at the intersection of Drupal and nonprofits -- including our plans for NTC in March. Got something specific you want to talk about? Feel free to share ahead of time in our collaborative Google doc: https://nten.org/drupal/notes!
All nonprofit Drupal devs and users, regardless of experience level, are always welcome on this call.
This free call is sponsored by NTEN.org and open to everyone.
- Join the call: https://us02web.zoom.us/j/81817469653
- Meeting ID: 818 1746 9653
- Passcode: 551681
- One tap mobile:
  +16699006833,,81817469653# US (San Jose)
  +13462487799,,81817469653# US (Houston)
- Dial by your location:
  +1 669 900 6833 US (San Jose)
  +1 346 248 7799 US (Houston)
  +1 253 215 8782 US (Tacoma)
  +1 929 205 6099 US (New York)
  +1 301 715 8592 US (Washington DC)
  +1 312 626 6799 US (Chicago)
- Find your local number: https://us02web.zoom.us/u/kpV1o65N
- Follow along on Google Docs: https://nten.org/drupal/notes
Real Python: Using Python for Data Analysis
Data analysis is a broad term that covers a wide range of techniques that enable you to reveal any insights and relationships that may exist within raw data. As you might expect, Python lends itself readily to data analysis. Once Python has analyzed your data, you can then use your findings to make good business decisions, improve procedures, and even make informed predictions based on what you’ve discovered.
In this tutorial, you’ll:
- Understand the need for a sound data analysis workflow
- Understand the different stages of a data analysis workflow
- Learn how you can use Python for data analysis
Before you start, you should familiarize yourself with Jupyter Notebook, a popular tool for data analysis. Alternatively, JupyterLab will give you an enhanced notebook experience. You might also like to learn how a pandas DataFrame stores its data. Knowing the difference between a DataFrame and a pandas Series will also prove useful.
Get Your Code: Click here to download the free data files and sample code for your mission into data analysis with Python.
In this tutorial, you’ll use a file named james_bond_data.csv. This is a doctored version of the free James Bond Movie Dataset. The james_bond_data.csv file contains a subset of the original data with some of the records altered to make them suitable for this tutorial. You’ll find it in the downloadable materials. Once you have your data file, you’re ready to begin your first mission into data analysis.
Understanding the Need for a Data Analysis Workflow
Data analysis is a very popular field and can involve performing many different tasks of varying complexity. Which specific analysis steps you perform will depend on which dataset you’re analyzing and what information you hope to glean. To overcome these scope and complexity issues, you need to take a strategic approach when performing your analysis. This is where a data analysis workflow can help you.
A data analysis workflow is a process that provides a set of steps for your analysis team to follow when analyzing data. The implementation of each of these steps will vary depending on the nature of your analysis, but following an agreed-upon workflow allows everyone involved to know what needs to happen and to see how the project is progressing.
Using a workflow also helps futureproof your analysis methodology. By following the defined set of steps, your efforts become systematic, which minimizes the possibility that you’ll make mistakes or miss something. Furthermore, when you carefully document your work, you can reapply your procedures against future data as it becomes available. Data analysis workflows therefore also provide repeatability and scalability.
There’s no single data workflow process that suits every analysis, nor is there universal terminology for the procedures used within it. To provide a structure for the rest of this tutorial, the diagram below illustrates the stages that you’ll commonly find in most workflows:
A Data Analysis Workflow
The solid arrows show the standard data analysis workflow that you’ll work through to learn what happens at each stage. The dashed arrows indicate where you may need to carry out some of the individual steps several times depending upon the success of your analysis. Indeed, you may even have to repeat the entire process should your first analysis reveal something interesting that demands further attention.
Now that you have an understanding of the need for a data analysis workflow, you’ll work through its steps and perform an analysis of movie data. The movies that you’ll analyze all relate to the British secret agent Bond … James Bond.
Setting Your Objectives
The very first workflow step in data analysis is to carefully but clearly define your objectives. It’s vitally important for you and your analysis team to be clear on what exactly you’re all trying to achieve. This step doesn’t involve any programming but is every bit as important because, without an understanding of where you want to go, you’re unlikely to ever get there.
The objectives of your data analysis will vary depending on what you’re analyzing. Your team leader may want to know why a new product hasn’t sold, or perhaps your government wants information about a clinical test of a new medical drug. You may even be asked to make investment recommendations based on the past results of a particular financial instrument. Regardless, you must still be clear on your objectives. These define your scope.
In this tutorial, you’ll gain experience in data analysis by having some fun with the James Bond movie dataset mentioned earlier. What are your objectives? Now pay attention, 007:
- Is there any relationship between the Rotten Tomatoes ratings and those from IMDb?
- Are there any insights to be gleaned from analyzing the lengths of the movies?
- Is there a relationship between the number of enemies James Bond has killed and the user ratings of the movie in which they were killed?
Now that you’ve been briefed on your mission, it’s time to get out into the field and see what intelligence you can uncover.
Acquiring Your Data
Once you’ve established your objectives, your next step is to think about what data you’ll need to achieve them. Hopefully, this data will be readily available, but you may have to work hard to get it. You may need to extract it from the data storage systems within an organization or collect survey data. Regardless, you’ll somehow need to get the data.
In this case, you’re in luck. When your bosses briefed you on your objectives, they also gave you the data in the james_bond_data.csv file. You must now spend some time becoming familiar with what you have in front of you. During the briefing, you made some notes on the content of this file:
Heading: Meaning
- Release: The release date of the movie
- Movie: The title of the movie
- Bond: The actor playing the title role
- Bond_Car_MFG: The manufacturer of James Bond’s car
- US_Gross: The movie’s gross US earnings
- World_Gross: The movie’s gross worldwide earnings
- Budget ($ 000s): The movie’s budget, in thousands of US dollars
- Film_Length: The running time of the movie
- Avg_User_IMDB: The average user rating from IMDb
- Avg_User_Rtn_Tom: The average user rating from Rotten Tomatoes
- Martinis: The number of martinis that Bond drank in the movie

As you can see, you have quite a variety of data. You won’t need all of it to meet your objectives, but you can think more about this later. For now, you’ll concentrate on getting the data out of the file and into Python for cleansing and analysis.
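As a minimal sketch of this acquisition step (the tutorial itself uses pandas, and the two sample rows below are invented stand-ins for the real file, covering only a subset of the columns), Python’s standard csv module can already pull the headings and records into dictionaries:

```python
import csv
import io

# Invented two-row stand-in for james_bond_data.csv, using a subset of
# the column headings described above (the values are illustrative only).
sample = io.StringIO(
    "Release,Movie,Bond,US_Gross,Avg_User_IMDB\n"
    "1962,Dr. No,Sean Connery,16067035,7.3\n"
    "1964,Goldfinger,Sean Connery,51081062,7.7\n"
)

# Each row becomes a dict keyed by heading; every value is still a string,
# so numeric fields will need conversion during the cleansing stage.
rows = list(csv.DictReader(sample))
print(rows[0]["Movie"], float(rows[0]["Avg_User_IMDB"]))
```

A pandas DataFrame gives you the same rows plus typed columns and vectorized operations, which is why the tutorial reaches for it next.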
Read the full article at https://realpython.com/python-for-data-analysis/ »
Colin Watson: Task management
Now that I’m freelancing, I need to actually track my time, which is something I’ve had the luxury of not having to do before. That meant something of a rethink of the way I’ve been keeping track of my to-do list. Up to now that was a combination of things like the bug lists for the projects I’m working on at the moment, whatever task tracking system Canonical was using at the moment (Jira when I left), and a giant flat text file in which I recorded logbook-style notes of what I’d done each day plus a few extra notes at the bottom to remind myself of particularly urgent tasks. I could have started manually adding times to each logbook entry, but ugh, let’s not.
In general, I had the following goals (which were a bit reminiscent of my address book):
- free software throughout
- storage under my control
- ability to annotate tasks with URLs (especially bugs and merge requests)
- lightweight time tracking (I’m OK with having to explicitly tell it when I start and stop tasks)
- ability to drive everything from the command line
- decent filtering so I don’t have to look at my entire to-do list all the time
- ability to easily generate billing information for multiple clients
- optionally, integration with Android (mainly so I can tick off personal tasks like “change bedroom lightbulb” or whatever that don’t involve being near a computer)
I didn’t do an elaborate evaluation of multiple options, because I’m not trying to come up with the best possible solution for a client here. Also, there are a bazillion to-do list trackers out there and if I tried to evaluate them all I’d never do anything else. I just wanted something that works well enough for me.
Since it came up on Mastodon: a bunch of people swear by Org mode, which I know can do at least some of this sort of thing. However, I don’t use Emacs and don’t plan to use Emacs. nvim-orgmode does have some support for time tracking, but when I’ve tried vim-based versions of Org mode in the past I’ve found they haven’t really fitted my brain very well.
Taskwarrior and Timewarrior
One of the other Freexian collaborators mentioned Taskwarrior and Timewarrior, so I had a look at those.
The basic idea of Taskwarrior is that you have a task command that tracks each task as a blob of JSON and provides subcommands to let you add, modify, and remove tasks with a minimum of friction. task add adds a task, and you can add metadata like project:Personal (I always make sure every task has a project, for ease of filtering). Just running task shows you a task list sorted by Taskwarrior’s idea of urgency, with an ID for each task, and there are various other reports with different filtering and verbosity. task <id> annotate lets you attach more information to a task. task <id> done marks it as done. So far so good, so a redacted version of my to-do list looks like this:
$ task ls
ID A Project     Tags Description
17   Freexian         Add Incus support to autopkgtest [2]
 7   Columbiform      Figure out Lloyds online banking [1]
 2   Debian           Fix troffcvt for groff 1.23.0 [1]
11   Personal         Replace living room curtain rail

Once I got comfortable with it, this was already a big improvement. I haven’t bothered to learn all the filtering gadgets yet, but it was easy enough to see that I could do something like task all project:Personal and it’d show me both pending and completed tasks in that project, and that all the data was stored in ~/.task - though I have to say that there are enough reporting bells and whistles that I haven’t needed to poke around manually. In combination with the regular backups that I do anyway (you do too, right?), this gave me enough confidence to abandon my previous text-file logbook approach.
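Taskwarrior will also hand you those JSON blobs directly via task export, so ad-hoc reports are a few lines of Python away. A sketch, using an invented two-task sample in place of real exported data:

```python
import json

# Invented stand-in for the JSON array that `task export` emits.
exported = json.loads("""[
  {"id": 11, "description": "Replace living room curtain rail",
   "project": "Personal", "status": "pending"},
  {"id": 17, "description": "Add Incus support to autopkgtest",
   "project": "Freexian", "status": "pending"}
]""")

def by_project(tasks, project):
    """Mimic a `project:<name>` filter: pending tasks in one project."""
    return [t for t in tasks if t.get("project") == project
            and t.get("status") == "pending"]

for t in by_project(exported, "Personal"):
    print(t["id"], t["description"])
```

Taskwarrior’s built-in reports cover the common cases, but this kind of post-processing is handy for anything its filter language doesn’t express directly.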
Next was time tracking. Timewarrior integrates with Taskwarrior, albeit in an only semi-packaged way, and it was easy enough to set that up. Now I can do:
$ task 25 start
Starting task 00a9516f 'Write blog post about task tracking'.
Started 1 task.
Note: '"Write blog post about task tracking"' is a new tag.
Tracking Columbiform "Write blog post about task tracking"
  Started 2024-01-10T11:28:38
  Current                   38
  Total                0:00:00
You have more urgent tasks.
Project 'Columbiform' is 25% complete (3 of 4 tasks remaining).

When I stop work on something, I do task active to find the ID, then task <id> stop. Timewarrior does the tedious stopwatch business for me, and I can manually enter times if I forget to start/stop a task. Then the really useful bit: I can do something like timew summary :month <name-of-client> and it tells me how much to bill that client for this month. Perfect.
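The same billing figure could in principle be recomputed from timew export, which emits JSON intervals tagged with the task’s project. A sketch (the interval data is invented, and I’m assuming Timewarrior’s compact UTC timestamp format):

```python
import json
from datetime import datetime, timedelta

# Invented stand-in for `timew export` output.
intervals = json.loads("""[
  {"start": "20240110T112838Z", "end": "20240110T121838Z",
   "tags": ["Columbiform"]},
  {"start": "20240111T090000Z", "end": "20240111T103000Z",
   "tags": ["Columbiform"]},
  {"start": "20240111T110000Z", "end": "20240111T113000Z",
   "tags": ["Freexian"]}
]""")

def billable(intervals, client):
    """Sum tracked time for one client tag, like `timew summary <tag>`."""
    fmt = "%Y%m%dT%H%M%SZ"
    total = timedelta()
    for iv in intervals:
        if client in iv.get("tags", []):
            total += (datetime.strptime(iv["end"], fmt)
                      - datetime.strptime(iv["start"], fmt))
    return total

print(billable(intervals, "Columbiform"))  # 0:50:00 + 1:30:00
```

Not something I need day to day, since timew summary does it for me, but useful if a client ever wants the raw numbers in a different shape.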
I also started using VIT to simplify the day-to-day flow a little, which means I’m normally just using one or two keystrokes rather than typing longer commands. That isn’t really necessary from my point of view, but it does save some time.
Android integration

I left Android integration for a bit later since it wasn’t essential. When I got round to it, I have to say that it felt a bit clumsy, but it did eventually work.
The first step was to set up a taskserver. Most of the setup procedure was OK, but I wanted to use Let’s Encrypt to minimize the amount of messing around with CAs I had to do. Getting this to work involved hitting things with sticks a bit, and there’s still a local CA involved for client certificates. What I ended up with was a certbot setup with the webroot authenticator and a custom deploy hook as follows (with cert_name replaced by a DNS name in my house domain):
#! /bin/sh
set -eu

cert_name=taskd.example.org

found=false
for domain in $RENEWED_DOMAINS; do
    case "$domain" in
        $cert_name)
            found=:
            ;;
    esac
done
$found || exit 0

install -m 644 "/etc/letsencrypt/live/$cert_name/fullchain.pem" \
    /var/lib/taskd/pki/fullchain.pem
install -m 640 -g Debian-taskd "/etc/letsencrypt/live/$cert_name/privkey.pem" \
    /var/lib/taskd/pki/privkey.pem
systemctl restart taskd.service

I could then set this in /etc/taskd/config (server.crl.pem and ca.cert.pem were generated using the documented taskserver setup procedure):
server.key=/var/lib/taskd/pki/privkey.pem
server.cert=/var/lib/taskd/pki/fullchain.pem
server.crl=/var/lib/taskd/pki/server.crl.pem
ca.cert=/var/lib/taskd/pki/ca.cert.pem

Then I could set taskd.ca on my laptop to /usr/share/ca-certificates/mozilla/ISRG_Root_X1.crt and otherwise follow the client setup instructions, run task sync init to get things started, and then task sync every so often to sync changes between my laptop and the taskserver.
I used TaskWarrior Mobile as the client. I have to say I wouldn’t want to use that client as my primary task tracking interface: the setup procedure is clunky even beyond the necessity of copying a client certificate around, it expects you to give it a .taskrc rather than having a proper settings interface for that, and it only seems to let you add a task if you specify a due date for it. It also lacks Timewarrior integration, so I can only really use it when I don’t care about time tracking, e.g. personal tasks. But that’s really all I need, so it meets my minimum requirements.
Next?

Considering this is literally the first thing I tried, I have to say I’m pretty happy with it. There are a bunch of optional extras I haven’t tried yet, but in general it kind of has the vim nature for me: if I need something it’s very likely to exist or easy enough to build, but the features I don’t use don’t get in my way.
I wouldn’t recommend any of this to somebody who didn’t already spend most of their time in a terminal - but I do. I’m glad people have gone to all the effort to build this so I didn’t have to.
Tag1 Consulting: Exploring Drupal’s Sustainability Project, Gander's Ability to Help, and How You Can Too.
Discover what sustainability really means in tech in our latest Tag1 Team Talks episode. Learn how the Drupal community contributes to this vital cause and how you can get involved
Read more michaelemeyers Wed, 01/17/2024 - 05:00

Ruqola 2.1 Beta
Ruqola 2.1 Beta (2.0.81) is available for packaging and testing.
Ruqola is a chat app for Rocket.chat. This beta release will build with the current release candidate of KDE Frameworks 6 and KTextAddons allowing distros to start to move away from Qt 5.
URL: https://download.kde.org/unstable/ruqola/
SHA256: 2c4135c08acc31f846561b488aa24f1558d7533b502f9ba305be579d43f81b73
Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell jr@jriddell.org
https://jriddell.org/esk-riddell.gpg
LN Webworks: Drupal Recipes: All You Need to Know
Building a Drupal website from scratch can be challenging and time-consuming. That’s exactly why we need Drupal recipes. These are sets of predefined configurations or components that can be used as the starting point for addressing specific needs, such as creating an e-commerce platform, a blog, or other projects. Whatever type of Drupal development services you are interested in, Drupal recipes are available for them all. They make Drupal project development much easier and faster.
www @ Savannah: The Moral and the Legal
New article by Richard Stallman: https://www.gnu.org/philosophy/the-moral-and-the-legal.html
Python Engineering at Microsoft: Join us for AI Chat App Hack from Jan. 29 – Feb.12
Over the past six months, we’ve met hundreds of developers who are using Python to build AI chat apps for their own knowledge domains, using the RAG (Retrieval Augmented Generation) approach to send chunks of knowledge to an LLM along with the user question.
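In outline, the RAG flow is: retrieve the chunks most relevant to the question, then hand them to the model alongside it. A deliberately tiny sketch of that flow - the documents are invented, word overlap stands in for the vector search a real app would use, and a real app would then send the prompt to an actual LLM:

```python
# Toy "knowledge base" chunks; real apps would split documents and embed them.
chunks = [
    "The parental leave policy grants 16 weeks of paid leave.",
    "Expense reports must be filed within 30 days of purchase.",
    "The office is closed on national holidays.",
]

def retrieve(question, chunks):
    """Return the chunk sharing the most words with the question
    (a crude stand-in for embedding-based vector search)."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

question = "How many weeks of paid parental leave do we offer?"
context = retrieve(question, chunks)

# The retrieved chunk is sent to the model together with the user question.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```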
We’ve also heard from many developers that they’d like to learn how to build their own RAG chat apps, but they don’t know where to start. So we’re hosting a virtual hackathon to help you learn how to build your own RAG chat app with Python!
From January 29th to February 12th, we’ll host live streams showing you how to build on our most popular RAG chat sample repository, while also explaining the core concepts underlying all modern RAG chat apps. Live stream topics will include vector search, access control, and GPT-4 with vision. We’re hoping to get developers from all over the world involved, so we’ll also have live streams in Spanish, Portuguese, and Chinese. There will be prizes for the best chat apps, and even a prize for our most helpful community member.
To learn more, visit the AI Chat App Hack page, and follow the steps there to register and meet the community. Hope to see you there!
More RAG resources for Python developers

If you’re interested in learning more about RAG chat apps but can’t join the hack, here are some resources to get you started:
- Tutorial: Get started with the Python enterprise chat sample using RAG
- GitHub Universe: Quickly build and deploy OpenAI apps on Azure, infused with your own data
- Azure AI resources for Python developers
- Using Llamaindex with Azure AI Search
- AI Discord community
Python⇒Speed: Beware of misleading GPU vs CPU benchmarks
Do you use NumPy, Pandas, or scikit-learn and want to get faster results? Nvidia has created GPU-based replacements for each of these with the shared promise of extra speed.
For example, if you visit the front page of Nvidia’s RAPIDS project, you’ll see benchmarks showing cuDF, a GPU-based Pandas replacement, is 15× to 80× faster than Pandas!
Unfortunately, while those speed-ups are impressive, they are also misleading. GPU-based libraries might be the answer to your performance problems… or they might be an unnecessary and expensive distraction.
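One way to avoid being misled is to time your whole pipeline yourself rather than trusting kernel-only numbers. A minimal sketch of end-to-end measurement - the workload here is a pure-Python stand-in; for a GPU library the "setup" would include host-to-device data transfer:

```python
import time

def expensive_setup():
    # Stand-in for costs benchmarks often omit (e.g. copying data to the GPU).
    return list(range(1_000_000))

def computation(data):
    # Stand-in for the kernel that vendor benchmarks time in isolation.
    return sum(data)

# Measure the whole pipeline, not just the kernel: a "15x faster" kernel
# can still lose overall once setup and transfer time are included.
start = time.perf_counter()
data = expensive_setup()
result = computation(data)
elapsed = time.perf_counter() - start
print(f"end-to-end: {elapsed:.3f}s, result={result}")
```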
Read more...

Seth Michael Larson: Defending against the PyTorch supply chain attack PoC
Published 2024-01-17 by Seth Larson
Last week there was a publication describing a proof-of-concept supply chain attack against PyTorch, using persistence in self-hosted GitHub runners, capturing tokens from triggerable jobs as a third-party contributor, and modifying workflows. This report was #1 on Hacker News for most of Sunday. In the comments of this publication there was a lot of discussion, with folks asking "how do you defend from this type of attack"?
Luckily for open source users, there are already techniques that can be used today to mitigate the downstream impact of a compromised dependency:
- Using a lock file with pinned hashes like pip with --require-hashes, poetry.lock, or Pipfile.lock.
- Reviewing diffs between currently pinned and new candidate releases. The diff must be of the installed artifacts, not of git tags or source repository information. Tools like diffoscope are useful for diffing wheel files, which are actually zip files in disguise.
- For larger organizations the cost of manual review can be amortized by mirroring PyPI and only updating dependencies that have been manually reviewed.
- Binary or compiled dependencies can be built from source to ensure malicious code isn't hidden from human inspection.
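The first of those mitigations boils down to refusing to install anything whose digest doesn't match a recorded pin. A toy sketch of the check that hash-checking installs perform - the artifact and pin below are invented; real pins live in a lock file as lines like `somepkg==1.0 --hash=sha256:...`:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    """Hash a downloaded artifact, as hash-checking installers do."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Invented stand-in for a downloaded wheel file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".whl") as f:
    f.write(b"pretend wheel contents")
    artifact = f.name

# The pinned digest recorded in the lock file at review time.
pinned = hashlib.sha256(b"pretend wheel contents").hexdigest()

ok = sha256_of(artifact) == pinned
print("hash verified" if ok else "HASH MISMATCH - refusing to install")
```

A compromised upstream release would produce a different digest, and the install fails instead of silently picking up the new artifact.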
These are tried-and-true methods to protect yourself and ensure dependencies aren't compromised regardless of what happens upstream. Obviously the suggestions above take time and effort to implement. Generally there's a desire from me and others to make the above steps easier for consumers, such as by exposing build provenance for easier reviewing of source code or by improving the overall safety of PyPI content using malware scanning and reporting.
Part of my plans for 2024 is to create guidance for Python open source consumers and maintainers for how to safely use packaging tools both from the perspective of supply chain integrity but also for vulnerabilities, builds, etc. So stay tuned for that!
CPython Software Bill-of-Materials update

Last week I published a draft of CPython's SBOM document, specifically for the source tarballs, in order to solicit feedback from consumers of SBOMs and developers of SBOM tooling. I received great feedback from Adolfo Garcia Veytia and Ritesh Noronha, including the following points:
- Strip version information from the fileName attribute
- The top-level CPython component had no relationships to non-file components; it should have DEPENDS_ON relationships to all its dependent packages.
- Fix the formatting of the "Tool: " name and version. Correct format is {name}-{version}.
- Use the fileName attribute on the CPython package instead of using a separate file component for the tarball containing CPython source code.
- Include an email address for all "Person" identities.
- Guidance on alternatives to the documentNamespace field.
After applying this feedback we now have an SBOM which meets NTIA's Minimum Elements of an SBOM and scores 9.6 out of 10 for the SBOM Quality Score.
Next I'm working on the infrastructure for actually generating and making the SBOM available for consumers:
- Created a PR for generating the draft SBOM. Next I need to hook this into the actual release process and opportunistically generate an SBOM.
- Applied changes according to feedback from SBOM reviewers.
- Created a PR for enabling hosting an SBOM artifact on https://python.org/downloads
- Reviewed PEP 740 proposal for arbitrary attestation mechanism for PyPI artifacts.
- Triaged multiple reports to the Python Security Response Team.
That's all for this week! 👋 If you're interested in more you can read last week's report.
Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.
This work is licensed under CC BY-SA 4.0