Feeds
The Drop Times: “It’s Time to Give Back Beyond Code”: Kevin Quillen
Python Bytes: #397 So many PyCon videos
CKEditor: DrupalCon 2024 - A chat with Simon Morvan
Dirk Eddelbuettel: digest 0.6.37 on CRAN: Maintenance
Release 0.6.37 of the digest package arrived at CRAN today and has also been uploaded to Debian.
digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c, xxh3_64 and xxh3_128), and it enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 70.8 million downloads just on the partial cloud mirrors of CRAN which keep logs), as many tasks involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.
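For readers who think in Python rather than R, the core idea (hash a serialized object to obtain a cache key) can be sketched roughly as follows. This is only an analogue of what digest does, using Python's standard library, not the package itself:

```python
import hashlib
import pickle

def object_digest(obj) -> str:
    """Hash an arbitrary (picklable) object via its serialized form,
    analogous to how digest hashes R objects via R's native serialization."""
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()

# Two structurally equal objects produce the same key, which is what
# makes such digests useful as general-purpose cache keys.
a = {"x": [1, 2, 3], "y": ("nested", {"z": 4.5})}
b = {"x": [1, 2, 3], "y": ("nested", {"z": 4.5})}
assert object_digest(a) == object_digest(b)
```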
This release updates one of the hashing source functions which, to remain close to its upstream, used Free() and Calloc() (uppercased to use the R allocator) rather than the prefixed, stricter versions R_Free() and R_Calloc(). R will switch to enforcing these in the next release next year. Kevin had noticed (while doing some other testing) that this now fails under R-devel (with a switch set), and prepared a very nice and clean PR to take care of it. As of today, CRAN is sending ‘please fix, or else …’ notes, so it was a good time to send this to CRAN. We also updated some remaining http URLs in the README.md to https, and switched the Author/Maintainer fields to the now also mandatory Authors@R.
My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
TestDriven.io: Limiting Content Types in a Django Model
Message-passing APIs (SIMPL)
In the KDE world, famously there was that weekend where DCOP (Desktop COmmunication Protocol) was created, setting the stage for things like KParts. GNOME picked the CORBA object model, and much later the Free Desktop world settled on DBus as a message-passing API. But even at the time, there were other message-passing APIs. At work-work I use one, called SIMPL, which is kind of a shout-out to the late ’90s of DCOP.
Please note that my presentation of “history” is just what I remember now of events that were already “tales told ‘round the campfire” 15 years ago. Corrections welcome (by email).
At work-work SIMPL is just a given, and it’s got wrappers and abstractions so that there’s a decent C++ style API around it. At its heart it is a point-to-point message-passing API with no policy at all about the payload of messages. That is both a blessing – no policy means it can be used for whatever kind of messages you think is necessary – and a curse – no policy means that you end up writing a bunch of abstractions to represent the messages that you actually use.
Code at the wrapped-in-C++ level looks something like this (effectively obscuring the underlying transport):
```cpp
auto reply = CoordinatorTask().Send(requests::GetCurrentUser());
if (reply.has_value()) {
    do_something(reply.value().username);
    ...
```

There does not seem to be much online about SIMPL anymore. There is a LinuxDevices article from 2000, when SIMPL was under some active development as an Open Source project, there’s an LWN comment from 2015 that still mentions it, and there’s a Wikipedia article on it, as a historical note. Note that the Website link on Wikipedia goes to something that is now a spam domain. I can’t quickly find sources anymore.
I do wonder at the chances of history that made desktop developers at the time entirely miss out on this existing message-passing mechanism – perhaps the SIMPL authors were too focused on industrial automation and not visible in the X11 desktop space, and not visible on SunOS and other platforms where a fair bit of KDE development happened at the time.
One thing that the SIMPL library developers emphasize repeatedly in documentation (there’s a book) is a philosophy of doing one thing and doing it well. So serialization and message payloads are not part of this – no policy. Security and access control are not part of this either – no policy. Those concerns are things that can go into a different API layer on top of SIMPL.
Something else that the SIMPL authors emphasize is doing one thing and doing it well. That applies to the applications that use SIMPL, in particular. And that translates, in the depths of the library, to being able to register only one name for an application (for purposes of discovering what other applications there are, each application has a unique name in the system). It translates into a lack of thread safety, due to the use of global state. It translates into a contextless API, so all an application can do is call “give me the next incoming message”. There’s no concept of an event-loop, and no obvious mechanisms for integrating SIMPL message-handling into another event loop (like an X11 loop).
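To make that contextless shape concrete, here is a rough sketch (in Python, with invented names – this is not SIMPL’s actual API) of the single blocking receive point such a design forces on an application:

```python
import queue

# A stand-in transport, for illustration only; in a real system this
# would be the message-passing layer, not an in-process queue.
_inbox: "queue.Queue[tuple[str, bytes]]" = queue.Queue()

def receive_next_message(timeout_s: float):
    """Blocking 'give me the next incoming message' -- the only receive
    primitive a contextless API offers. Returns (sender, payload) or None."""
    try:
        return _inbox.get(timeout=timeout_s)
    except queue.Empty:
        return None

def main_loop() -> None:
    # No event loop, no callbacks: the application owns one blocking loop,
    # and all dispatch policy on the raw payload lives here. This is why
    # integrating with another event loop (X11, Qt) is awkward.
    while True:
        msg = receive_next_message(timeout_s=1.0)
        if msg is None:
            continue  # nothing arrived before the timeout
        sender, payload = msg
        print(f"message from {sender!r}: {payload!r}")

if __name__ == "__main__":
    _inbox.put(("coordinator", b"\x01\x02"))  # simulate one incoming message
    main_loop()
```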
For me personally this means that I need to re-tool my brain during the trip between home and work, switching from Qt wrappers around DBus to work-work wrappers around SIMPL and the subtleties of each. In practice, that means mostly cursing QtDBus (because of the relative amount of time I put into both).
Matthew Garrett: Client-side filtering of private data is a bad idea
Feeld is a dating app aimed largely at alternative relationship communities (think "classier Fetlife" for the most part), so unsurprisingly it's fairly popular in San Francisco. Their website makes the claim:
Can people see what or who I'm looking for?
No. You're the only person who can see which genders or sexualities you're looking for. Your curiosity and privacy are always protected.
which is based on you being able to restrict searches to people of specific genders, sexualities, or relationship situations. This sort of claim is one of those things that just sits in the back of my head worrying me, so I checked it out.
First step was to grab a copy of the Android APK (there are multiple sites that scrape them from the Play Store) and run it through apk-mitm - Android apps by default don't trust any additional certificates in the device certificate store, and also frequently implement certificate pinning. apk-mitm pulls apart the apk, looks for known http libraries, disables pinning, and sets the appropriate manifest options for the app to trust additional certificates. Then I set up mitmproxy, installed the cert on a test phone, and installed the app. Now I was ready to start.
What became immediately clear was that the app was using graphql to query. What was a little more surprising is that it appears to have been implemented such that there's no server state - when browsing profiles, the client requests a batch of profiles along with a list of profiles that the client has already seen. This has the advantage that the server doesn't need to keep track of a session, but also means that queries just keep getting larger and larger the more you swipe. I'm not a web developer, I have absolutely no idea what the tradeoffs are here, so I point this out as a point of interest rather than anything else.
Anyway. For people unfamiliar with graphql, it's basically a way to query a database and define the set of fields you want returned. Let's take the example of requesting a user's profile. You'd provide the profile ID in question, and request their bio, age, rough distance, status, photos, and other bits of data that the client should show. So far so good. But what happens if we request other data?
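As a purely hypothetical illustration (the endpoint and field names below are invented, not Feeld's actual schema), the kind of query the client sends looks something like this. The crucial point is that the client itself chooses the field list:

```python
import json
import urllib.request

# Hypothetical GraphQL endpoint and field names, for illustration only.
query = """
query Profile($id: ID!) {
  profile(id: $id) {
    displayName
    age
    bio
    photos { url }
  }
}
"""

payload = {"query": query, "variables": {"id": "some-profile-id"}}
req = urllib.request.Request(
    "https://api.example.com/graphql",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Nothing here stops the client from adding a field the UI never asks for.
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```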
graphql supports introspection to request a copy of the database schema, but this feature is optional and was disabled in this case. Could I find this data anywhere else? Pulling apart the apk revealed that it's a React Native app, effectively a framework for writing native apps in Javascript. Sometimes you'll be lucky and find the actual Javascript source there, but these days it's more common to find Hermes blobs. Fortunately hermes-dec exists and does a decent job of recovering something that approximates the original input, and from this I was able to find various lists of database fields.
So, remember that original FAQ statement, that your desires would never be shown to anyone else? One of the fields mentioned in the app was "lookingFor", a field that wasn't present in the default profile query. What happens if we perform the incredibly complicated hack of exporting a profile query as a curl statement, add "lookingFor" into the set of requested fields, and run it?
Oops.
So, point 1 is that you can't simply protect data by having your client not ask for it - private data must never be released. But there was a whole separate class of issue that was even more obvious.
Looking more closely at the profile data returned, I noticed that there were fields there that weren't being displayed in the UI. Those included things like "ageRange", the range of ages that the profile owner was interested in, and also whether the profile owner had already "liked" or "disliked" your profile (which means a bunch of the profiles you see may already have turned you down, but the app simply didn't show that). This isn't ideal, but what was more concerning was that profiles that were flagged as hidden were still being sent to the app and then just not displayed to the user. Another example of this is that the app supports associating your profile with profiles belonging to partners - if one of those profiles was then hidden, the app would stop showing the partnership, but was still providing the profile ID in the query response and querying that ID would still show the hidden profile contents.
Reporting this was inconvenient. There was no security contact listed on the website or in the app. I ended up finding Feeld's head of trust and safety on Linkedin, paying for a month of Linkedin Pro, and messaging them that way. I was then directed towards a HackerOne program with a link to terms and conditions that 404ed, and it took a while to convince them I was uninterested in signing up to a program without explicit terms and conditions. Finally I was just asked to email security@, and successfully got in touch. I heard nothing back, but after prompting was told that the issues were fixed - I then looked some more, found another example of the same sort of issue, and eventually that was fixed as well. I've now been informed that work has been done to ensure that this entire class of issue has been dealt with, but I haven't done any significant amount of work to ensure that that's the case.
You can't trust clients. You can't give them information and assume they'll never show it to anyone. You can't put private data in a database with no additional acls and just rely on nobody ever asking for it. You also can't find a single instance of this sort of issue and fix it without verifying that there aren't other examples of the same class. I'm glad that Feeld engaged with me earnestly and fixed these issues, and I really do hope that this has altered their development model such that it's not something that comes up again in future.
(Edit to add: as far as I can tell, pictures tagged as "private" which are only supposed to be visible if there's a match were appropriately protected, and while there is a "location" field that contains latitude and longitude this appears to only return 0 rather than leaking precise location. I also saw no evidence that email addresses, real names, or any billing data was leaked in any way)
FSF Events: Free Software Directory meeting on IRC: Friday, August 23, starting at 12:00 EDT (16:00 UTC)
The Drop Times: A Name of Purpose and Clarity
Real Python: Python Classes: The Power of Object-Oriented Programming
Python supports the object-oriented programming paradigm through classes. They provide an elegant way to define reusable pieces of code that encapsulate data and behavior in a single entity. With classes, you can quickly and intuitively model real-world objects and solve complex problems.
If you’re new to classes, need to refresh your knowledge, or want to dive deeper into them, then this tutorial is for you!
In this tutorial, you’ll learn how to:
- Define Python classes with the class keyword
- Add state to your classes with class and instance attributes
- Provide behavior to your classes with methods
- Use inheritance to build hierarchies of classes
- Provide interfaces with abstract classes
To get the most out of this tutorial, you should know about Python variables, data types, and functions. Some experience with object-oriented programming (OOP) is also a plus. Don’t worry if you’re not an OOP expert yet. In this tutorial, you’ll learn the key concepts that you need to get started and more. You’ll also write several practical examples to help reinforce your knowledge of Python classes.
Get Your Code: Click here to download your free sample code that shows you how to build powerful object blueprints with classes in Python.
Take the Quiz: Test your knowledge with our interactive “Python Classes - The Power of Object-Oriented Programming” quiz. You’ll receive a score upon completion to help you track your learning progress.
Getting Started With Python Classes

Python is a multiparadigm programming language that supports object-oriented programming (OOP) through classes that you can define with the class keyword. You can think of a class as a piece of code that specifies the data and behavior that represent and model a particular type of object.
What is a class in Python? A common analogy is that a class is like the blueprint for a house. You can use the blueprint to create several houses and even a complete neighborhood. Each concrete house is an object or instance that’s derived from the blueprint.
Each instance can have its own properties, such as color, owner, and interior design. These properties carry what’s commonly known as the object’s state. Instances can also have different behaviors, such as locking the doors and windows, opening the garage door, turning the lights on and off, watering the garden, and more.
In OOP, you commonly use the term attributes to refer to the properties or data associated with a specific object of a given class. In Python, attributes are variables defined inside a class with the purpose of storing all the required data for the class to work.
Similarly, you’ll use the term methods to refer to the different behaviors that objects will show. Methods are functions that you define within a class. These functions typically operate on or with the attributes of the underlying instance or class. Attributes and methods are collectively referred to as members of a class or object.
You can write classes to model the real world. These classes will help you better organize your code and solve complex programming problems.
For example, you can use classes to create objects that emulate people, animals, vehicles, books, buildings, cars, or other objects. You can also model virtual objects, such as a web server, directory tree, chatbot, file manager, and more.
Finally, you can use classes to build class hierarchies. This way, you’ll promote code reuse and remove repetition throughout your codebase.
In this tutorial, you’ll learn a lot about classes and all the cool things that you can do with them. To kick things off, you’ll start by defining your first class in Python. Then you’ll dive into other topics related to instances, attributes, and methods.
Defining a Class in Python

To define a class, you need to use the class keyword followed by the class name and a colon, just like you’d do for other compound statements in Python. Then you must define the class body, which will start at the next indentation level:

```python
class ClassName:
    <body>
```

In a class’s body, you can define attributes and methods as needed. As you already learned, attributes are variables that hold the class data, while methods are functions that provide behavior and typically act on the class data.
Note: In Python, the body of a given class works as a namespace where attributes and methods live. You can only access those attributes and methods through the class or its objects.
As an example of how to define attributes and methods, say that you need a Circle class to model different circles in a drawing application. Initially, your class will have a single attribute to hold the radius. It’ll also have a method to calculate the circle’s area:
```python
# circle.py

import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        return math.pi * self.radius ** 2
```

In this code snippet, you define Circle using the class keyword. Inside the class, you write two methods. The .__init__() method has a special meaning in Python classes. This method is known as the object initializer because it defines and sets the initial values for the object’s attributes. You’ll learn more about this method in the Instance Attributes section.
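As a quick usage sketch (not part of the original excerpt), you can instantiate the class and call its method like this:

```python
>>> from circle import Circle
>>> circle = Circle(radius=5)
>>> circle.radius
5
>>> round(circle.calculate_area(), 2)
78.54
```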
Read the full article at https://realpython.com/python-classes/ »
Consensus Enterprises: Starshot: Moving Drupal Towards a Product Platform
mandclu: What’s Cooking with the Events Recipe for Drupal CMS
When I first heard the vision for Starshot (now Drupal CMS), I knew exactly how I wanted to contribute. For years I have been working on trying to make it easier to quickly build Drupal sites following established best practices. I had been working on a set of modules I called Configuration Kits, but they were conceptually very similar to Recipes, albeit in a simpler (and less flexible) form.
Mike Driscoll: How to Plot in the Terminal with Python and Textualize
Have you ever wanted to create a plot or graph in your terminal? Okay, maybe you haven’t, but now that you know you can, you want to! Python has the plotext package for plotting in your terminal. However, while that package is amazing all on its own, there is another package called textual-plotext that wraps the plotext package so you can use it in Textual!
In this tutorial, you will learn the basics of creating a plot in your terminal using the textual-plotext package!
Installation

Your first step in your plotting adventure is to install textual-plotext. You can install the package using pip. Open up your terminal and run the following command:

```
python -m pip install textual-plotext
```

Pip will install textual-plotext and any dependencies it needs. Once that’s done, you’re ready to start plotting!
Usage

To kick things off, you’ll create a simple Textual application with a scatter chart in it. This example comes from the textual-plotext GitHub repo. You can create a Python file and name it something like scatterplot.py and then add the following code:

```python
from textual.app import App, ComposeResult
from textual_plotext import PlotextPlot

class ScatterApp(App[None]):
    def compose(self) -> ComposeResult:
        yield PlotextPlot()

    def on_mount(self) -> None:
        plt = self.query_one(PlotextPlot).plt
        y = plt.sin()  # sinusoidal test signal
        plt.scatter(y)
        plt.title("Scatter Plot")  # to apply a title

if __name__ == "__main__":
    ScatterApp().run()
```

When you run this code, you will see the following in your terminal:
Plotext does more than scatterplots though. You can create any of the following:
- Scatter Plot
- Line Plot
- Log Plot
- Stem Plot
- Bar Plots
- Datetime Plots
- Special Plots
- Decorator Plots
- Image Plots
- and more
Let’s look at a bar plot example:
```python
from textual.app import App, ComposeResult
from textual_plotext import PlotextPlot

class BarChartApp(App[None]):
    def compose(self) -> ComposeResult:
        yield PlotextPlot()

    def on_mount(self) -> None:
        languages = ["Python", "C++", "PHP", "Ruby", "Julia", "COBOL"]
        percentages = [14, 36, 11, 8, 7, 4]
        plt = self.query_one(PlotextPlot).plt
        plt.bar(languages, percentages)
        plt.title("Programming Languages")  # to apply a title

if __name__ == "__main__":
    BarChartApp().run()
```

The main difference here is that you’ll be calling plt.bar() with some parameters, whereas, in the previous example, you called plt.sin() with no parameters at all. Of course, you also need some data to plot for this second example. The provided data is all made up.
When you run this example, you will see something like the following:
Isn’t that neat?
Wrapping Up

You can create many other plots with Plotext. Check out their documentation and give some of the other plots a whirl. Have fun and make cool things in your terminal today!
The post How to Plot in the Terminal with Python and Textualize appeared first on Mouse Vs Python.
Qt Quick Effect Maker: What's new in Qt 6.8
As the Qt 6.8 Beta 3 was released last week, it is a good time to start talking about what's new in the Qt 6.8 release. This blog post introduces one of those things, the new effect nodes available in Qt Quick Effect Maker. Also included is an example application using all of these effects.
PyCharm: Introducing the PyCharm Databricks Integration
We’re introducing the Databricks integration with PyCharm Professional to make it easier for you to process, store, and analyze your data!
The integration allows you to build your data and AI apps on the Databricks Data Intelligence Platform directly within PyCharm Professional, enhancing the data analytics platform with the powerful Python IDE by JetBrains. It enables you to write code quickly and easily and run it in the cloud without extra configurations, and it offers additional benefits for working with data.
Read this blog post to learn more about the integration, who it will be useful for, and what benefits it offers.
What is Databricks?

The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data.
What is PyCharm Professional?

PyCharm Professional is a leading IDE for Python and other programming languages. It allows you to write high-quality and efficient code using superior code completion, refactoring capabilities, code inspections, seamless code and project navigation, a debugger, and a wide range of integrations, including Jupyter notebooks, testing frameworks, Git, CI/CD solutions, and more – all available in one place right out of the box.
Who will the integration be useful for?

Organizations and data professionals using data lakehouses, data lakes, and data warehouses via Databricks will benefit from this integration.
What benefits does the integration bring?

The integration combines the most powerful capabilities of each platform, allowing you to easily build all of your data and AI applications at scale within PyCharm:
- Use PyCharm to implement software development best practices, which are essential for large codebases, such as source code control, modular code layouts, testing, and more.
- Databricks enables the use of powerful clusters, allowing you to work on projects too large for a local machine and helping you orchestrate data processing efficiently.
You can write the code for your pipelines and jobs in PyCharm, then deploy, test, and run it in real time on your Databricks cluster without any additional configurations.
Let’s dive into more details about what the PyCharm Databricks integration provides.
Connect to your cluster via PyCharm

You can connect directly to the Databricks cluster via PyCharm and monitor the process within the IDE. This allows you to check if the cluster is running, see the results of the current session’s runs, and view process outcomes along with additional details.
Run Python scripts on a remote cluster

In addition, you can run Python scripts on a remote cluster, which is particularly useful for working with big data, and view the results in the IDE.
Run Jupyter notebooks or Python scripts as workflows

Additionally, you can run your notebooks or Python scripts as a Databricks workflow and see the output in the console.
You can see the results of the runs on the Databricks platform, including the runs initiated from PyCharm.
Synchronize project files to the Databricks workspace

The synchronization of project files with the Databricks workspace allows you to access and work with the same files in both PyCharm and Databricks workspaces. You can also schedule your notebooks and scripts and utilize other platform features for projects completed in PyCharm.
How to get started

Make sure you have the following ready to go:
- PyCharm Professional 2024.2 or later
- Big Data Tools Core plugin
- Databricks account
You can install the Databricks plugin either from JetBrains Marketplace or directly from within the PyCharm IDE.
Head over to the documentation to get step-by-step instructions on how to get started and use the plugin.
What do you think about this integration? Share your thoughts in the comments below.
Talk Python to Me: #474: Python Performance for Data Science
EuroPython: EuroPython 2024: Post Conference Feedback
It’s been a month now since EuroPython 2024 took place, and as the dust settles, we’ve gathered feedback from 157 attendees to understand what made this year’s event special, what challenges were faced, and how the experience can shape future EuroPythons. Whether you were there in person or followed along online, join us as we dive into analysing the feedback!
The data we have represents around 13% of the onsite attendees and around 11% of total attendees. It is difficult to tell whether this is a representative sample as we did not collect demographic data.
Satisfaction with the conference

Attendees were overall very satisfied with the conference, with a mean overall satisfaction rating of 4.3. Moreover, attendees were satisfied with most specific aspects of the conference, including the venue (mean = 4.6), food (mean = 4.0), and the social event (mean = 4.0). Attendees particularly liked that the conference was hosted in Prague, with the location getting a mean rating of 4.7.
Overall, the satisfaction ratings that had the strongest relationship (Spearman correlation) with overall satisfaction with the conference were the food (rs = 0.20) and the social event (rs = 0.17); however, these are still very modest correlations, indicating that other factors we did not measure were stronger drivers of satisfaction with the conference.
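For the curious, a rank correlation like this can be computed in a couple of lines with SciPy; the ratings below are invented placeholders, not the survey data:

```python
from scipy.stats import spearmanr

# Invented placeholder ratings (1-5 scale), for illustration only.
overall_satisfaction = [5, 4, 4, 3, 5, 4, 2, 5]
food_satisfaction = [4, 4, 3, 3, 5, 4, 2, 4]

rs, p_value = spearmanr(overall_satisfaction, food_satisfaction)
print(f"rs = {rs:.2f} (p = {p_value:.3f})")
```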
Mean and standard deviation of ratings per conference aspect

Things that people liked about the conference

Two of the feedback survey questions were free text, asking attendees to comment on what they did and did not like overall about the conference. Ninety-seven respondents gave positive feedback, and 92 gave negative feedback.
To make these easier to analyse, we used a large language model to extract the topics that each attendee was talking about in their responses. This was by no means perfect, so take it as a guide rather than something totally objective (e.g., “organisation” was pretty broadly interpreted by the model).
The top topics (where at least 5 people had mentioned them in their feedback) in the “liked” feedback are listed below.
| Liked topics | Number of respondents who mentioned topic |
| --- | --- |
| Community | 37 |
| Food | 20 |
| Organisation | 19 |
| Talks | 16 |
| Networking | 14 |
| Atmosphere | 14 |
| Venue | 10 |
| Communication | 5 |
| Location | 5 |
We have also plotted these topics by their number of mentions for a better overview.
With a general overview of the feedback on EuroPython 2024 in mind, let’s delve into specific aspects of the conference, such as the food, talks, workshops, and more.
Food & Catering

As noted above in the first graph, people were overall satisfied with the food. When breaking this down by dietary requirements, satisfaction varied a bit more.
Note: we have very small samples for some of these special dietary groups.

| Dietary requirement | Total respondents | Mean satisfaction rating |
| --- | --- | --- |
| None | 115 | 4.13 |
| Vegetarian | 23 | 3.82 |
| Vegan | 11 | 4.00 |
| Gluten-free | 2 | 3.50 |
| Lactose-intolerant | 4 | 4.00 |
| Low-carb | 2 | 3.50 |
| Halal | 4 | 3.00 |
| Kosher | 1 | 5.00 |
| Other | 2 | 2.50 |
Average Satisfaction with food grouped by Dietary requirements

PyLadies workshops

Only 28 respondents indicated that they had attended at least one PyLadies workshop, so we should interpret the below findings with caution. However, mean satisfaction with the PyLadies events was high (4.43). Below is the breakdown of mean satisfaction per PyLadies event (some people attended multiple events, hence the total exceeds 28).
| PyLadies event | Total respondents | Mean satisfaction rating |
| --- | --- | --- |
| Wednesday evening social event | 16 | 4.43 |
| Self-Defense workshop | 7 | 4.29 |
| PyLadies lunch | 16 | 4.38 |
| #IAmRemarkable workshop | 1 | 5.00 |
Community tutorials

Only 8 respondents indicated that they had attended at least one of the community tutorials, so unfortunately this means we cannot break down the data by tutorial. However, the overall mean satisfaction rating was high (4.0).
Talks

Attendees were asked to give feedback on which talks they particularly liked, and which ones they didn’t like. 104 attendees gave feedback on which talks they liked, and 59 gave feedback on talks they did not like.
We normalised the feedback by matching it up to the closest title, and then checking this manually. The findings are broken down below by talk level, type and track, as well as the most liked speakers.
Level

Attendees seemed to enjoy talks across all levels equally, with around 4 times more people liking versus disliking talks at each level.
| Talk level | Number of talks at conference | Number of liked talks | Number of disliked talks | Average number of likes per talk | Average number of dislikes per talk | Ratio of likes to dislikes |
| --- | --- | --- | --- | --- | --- | --- |
| beginner | 65 | 138 | 35 | 2.1 | 0.54 | 3.9 |
| intermediate | 93 | 178 | 46 | 1.9 | 0.49 | 3.9 |
| advanced | 14 | 23 | 6 | 1.6 | 0.43 | 3.8 |
Type

By far the most popular type of talk was the long talk session. These talks received a lot of feedback, with 15 times more attendees saying they liked these talks versus disliking them. The keynotes were also well received, with 4 times more people mentioning liking them versus disliking them.
Posters were only mentioned once in the feedback. While it is hard to say from this feedback as it does not measure attendance, it is possible these sessions were not well attended.
| Talk type | Number of talks at conference | Number of liked talks | Number of disliked talks | Average number of likes per talk | Average number of dislikes per talk | Ratio of likes to dislikes |
| --- | --- | --- | --- | --- | --- | --- |
| Talk (long session) | 25.0 | 61.0 | 4.0 | 2.4 | 0.16 | 15.2 |
| Keynote | 6.0 | 84.0 | 20.0 | 14 | 3.33 | 4.2 |
| Talk | 92.0 | 174.0 | 51.0 | 1.9 | 0.55 | 3.4 |
| Tutorial | 16.0 | 9.0 | 5.0 | 0.6 | 0.31 | 1.8 |
| Sponsored | 8.0 | 8.0 | 6.0 | 1 | 0.75 | 1.3 |
| Conference Workshop | | 1.0 | | 0.25 | | |
| Panel | 2.0 | 2.0 | | 1 | | |
| Poster | 9.0 | 1.0 | | 0.1 | | |
Track
While it is hard to draw robust conclusions for this category as there are very small samples in most of the tracks, some particularly popular tracks (with a high ratio of likes to dislikes and high number of overall likes) are:
- Arts, Crafts Culture & Demos
- Testing and QA
- Career, Life, Health
- Python Libraries and Tooling
- Python Internals & Ecosystem
| Track | Number of talks at conference | Number of liked talks | Number of disliked talks | Average number of likes per talk | Average number of dislikes per talk | Ratio of likes to dislikes |
| --- | --- | --- | --- | --- | --- | --- |
| Arts, Crafts Culture & Demos | 3 | 30 | 1.0 | 10.0 | 0.33 | 30.0 |
| Testing and QA | 5 | 17 | 1.0 | 3.4 | 0.2 | 17.0 |
| Career, Life, Health | 4 | 28 | 2.0 | 7.0 | 0.5 | 14.0 |
| PyData: Deep Learning, NLP, CV | 8 | 7 | 1.0 | 0.88 | 0.13 | 7.0 |
| DevOps and Infrastructure (Cloud & Hardware) | 5 | 13 | 2.0 | 2.6 | 0.4 | 6.5 |
| Python Libraries & Tooling | 23 | 49 | 10.0 | 2.13 | 0.43 | 4.9 |
| Web technologies | 9 | 4 | 1.0 | 0.44 | 0.11 | 4.0 |
| Python Internals & Ecosystem | 24 | 62 | 18.0 | 2.58 | 0.75 | 3.4 |
| Education, Community & Diversity | 7 | 19 | 7.0 | 2.71 | 1.0 | 2.7 |
| PyData: LLMs | 10 | 15 | 6.0 | 1.5 | 0.6 | 2.5 |
| PyData: Data Engineering | 10 | 9 | 4.0 | 0.9 | 0.4 | 2.3 |
| Software Engineering & Architecture | 14 | 31 | 14.0 | 2.21 | 1.0 | 2.2 |
| Security | 6 | 4 | 3.0 | 0.67 | 0.5 | 1.3 |
| PyData: Machine Learning, Stats | 7 | 3 | 3.0 | 0.43 | 0.43 | 1.0 |
| ~ None of these topics | 3 | 1 | 1.0 | 0.33 | 0.33 | 1.0 |
| PyData: Research & Applications | 6 | 1 | 2.0 | 0.17 | 0.33 | 0.5 |
| Ethics, Philosophy & Politics | 1 | 1 | | 1.0 | | |
| PyData: Software Packages & Jupyter | 5 | 7 | | 1.4 | | |
Special thank you to our amazing data wizard Jodie Burchell for putting together this report!
If you have any questions, you are welcome to reach out to the team at helpdesk@europython.eu
Gunnar Wolf: The social media my blog –as well as some other sites I publish in– is pushed to will soon stop receiving updates
For many years, I have been using the dlvr.it service to echo my online activity to where more people can follow it. Namely, I write in the following sources:
- My blog (where this content is being posted to) → RSS
- Mostly academic publications I send to my university’s repository (including conference presentations and the like) → RSS
- Videos posted to my YouTube channel (mostly my classes but some other material as well) → RSS
Via dlvr.it’s services, all those posts are “echoed” to Gwolfwolf on X (Twitter) and to the Gunnarwolfi page on Facebook. I use neither platform as a human (that is, I never log in there).
Anyway, dlvr.it sent me a mail stating they would soon (as in, within the next few weeks) be cutting their free tier. And, although I value their services and am thankful for their value so far, I am not going to pay for my personal stuff to be reposted to social media.
So, this post’s mission is twofold:
- If you follow me via any of those media, you will soon not be following me anymore 😉
- If you know of any service that would fill the space left by dlvr.it, I will be very grateful. Extra gratefulness points if the option you suggest is able to post to accounts in less-proprietary media (i.e. the Fediverse). Please tell me by mail (gwolf@gwolf.org).
Oh! I forgot to mention: of course, my blog will continue to appear in Planet Debian, Blografía, and any decent aggregator that consumes my RSS.
#! code: Drupal 10: An Introduction To Batch Processing With The Batch API
The Batch API is a powerful feature in Drupal that allows complex or time consuming tasks to be split into smaller parts.
For example, let's say you wanted to run a function that would go through every page on your Drupal site and perform an action. This might be removing specific authors from pages, removing links in text, or deleting certain taxonomy terms. You might create a small loop that just loads all pages and performs the action on each of them.
This is normally fine on sites that have a small number of pages (i.e. fewer than 100). But what happens when the site has 10,000 pages, or a million? Your little loop will soon hit PHP's execution time or memory limits and the script will be terminated. How do you know how far your loop progressed through the data? What happens if you try to restart the loop?
The Batch API in Drupal solves these problems by splitting the task into parts. Rather than running a single process to change all the pages at the same time, the batch runs a series of smaller tasks (e.g. just 50 pages at a time) until the whole task is complete. This means that you don't hit the memory or timeout limits of PHP, and the task finishes successfully and in a predictable way. Rather than running the operation in a single page request, the Batch API allows the operation to be run through lots of little page requests, each of which nibbles away at the task until it is complete, as the sketch below illustrates.
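The underlying pattern is language-agnostic; here is a minimal Python sketch of it (illustrative only – Drupal's actual Batch API is PHP and has its own callback conventions), showing fixed-size chunks with persisted progress:

```python
CHUNK_SIZE = 50  # how many items each small request processes

def run_one_batch(load_ids, process_page, state):
    """Process one chunk of a large job. 'load_ids' and 'process_page'
    are hypothetical callables supplied by the caller; 'state' is persisted
    between runs, so an interrupted job resumes where it left off
    instead of starting over."""
    ids = load_ids(offset=state["offset"], limit=CHUNK_SIZE)
    for page_id in ids:
        process_page(page_id)
    state["offset"] += len(ids)
    state["finished"] = len(ids) < CHUNK_SIZE  # a short chunk means we're done
    return state
```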
This technique can be used in a variety of different situations. Many contributed modules in Drupal make use of this feature to prevent processes from taking too long.
Debian Brasil: Debian Day 30 years at IF Sul de Minas, Pouso Alegre - Brazil
by Thiago Pezzo and Giovani Ferreira
Local celebrations of Debian Day 2024 also happened in [Pouso Alegre, MG, Brazil](https://www.openstreetmap.org/relation/315431). This year we managed to organize two days of lectures!
On the morning of Wednesday, August 14th, 2024, we were at the [Federal Institute of Education, Science and Technology of the South of Minas Gerais](https://portal.ifsuldeminas.edu.br/index.php) (IFSULDEMINAS), Pouso Alegre campus. We gave an introductory presentation on the Debian Project, the operating system, and the community for all three years of the Technical Course in Informatics (professional high school). The event was restricted to IFSULDEMINAS students and reached around 60 people.
On the morning of Saturday, August 17th, 2024, we held an event open to the community at the University of the Sapucaí Valley (Univás), with institutional support from the Information Systems Course. We spoke about the Debian Project with Giovani Ferreira (Debian Developer); about the Debian pt_BR translation team with Thiago Pezzo; about everyday experiences using free software with Virginia Cardoso; and about how to set up a development environment ready for production using Debian and Docker with Marcos António dos Santos. After the lectures, snacks, coffee and cake were served, while the participants talked, asked questions and shared experiences.
We would like to thank all the people who have helped us:
- Michelle Nery (IFSULDEMINAS) and André Martins (UNIVÁS) for their help with the local organization
- Paulo Santana (Debian Brazil) for the general organization
- Virginia Cardoso, Giovani Ferreira, Marcos António and Thiago Pezzo for the lectures
- And a special thanks to all of you who participated in our celebration
Some pictures: