Feeds

EV Charging is Boring ...

Planet KDE - Thu, 2023-01-26 07:18

… was the response from many manufacturers at this year’s Consumer Electronics Show (CES) in Las Vegas. CES 2023 was heavily focused on automotive, with a big presence from EV charging manufacturers for both domestic (at home) and commercial (fleet charging) applications.

The demand for charging options is growing along with the adoption of electric cars (EVs). The market for EV chargers is expanding, and more people want to transition to electric. This presents a big opportunity for EV charger manufacturers. In this blog post, we'll take a closer look at the impending rise of electric charging and what it means for EV charger producers.

Categories: FLOSS Project Planets

Codementor: Shortest Program in Language C

Planet Python - Thu, 2023-01-26 04:55
Let's learn the shortest program in C, the absolute basics of the C language.
Categories: FLOSS Project Planets

Bálint Réczey: How to speed up your next build with Firebuild?

Planet Debian - Thu, 2023-01-26 04:06

TL;DR: Just prefix your build command (or any command) with firebuild:

firebuild <build command>

OK, but how does it work?

Firebuild intercepts all processes started by the command to cache their outputs. The next time the command or any of its descendant commands is executed with the same parameters, inputs and environment, the outputs are replayed (the command is shortcut) from the cache instead of running the command again.

This is similar to how ccache and other compiler-specific caches work, but firebuild can shortcut any deterministic command, not only a specific list of compilers. Since the inputs of each command are determined at run time, firebuild does not need a maintained, complete dependency graph in the source like Bazel does. It can work with any build system that does not implement its own caching mechanism.

Determinism of commands is detected at run time by preloading libfirebuild.so and interposing standard library calls and syscalls. If the command’s and all its descendants’ inputs are available when the command starts, and all outputs can be calculated from the inputs, then the command can be shortcut; otherwise it will be executed again. The interception comes with a 5-10% overhead, but rebuilds can be 5-20 times faster, or even more, depending on the changes between the builds.

Can I try it?

It is already available in Debian Unstable and Testing and in Ubuntu’s development release, and the latest stable version is back-ported to supported Ubuntu releases via a PPA.

How can I analyze my builds with firebuild?

Firebuild can generate an HTML report showing each command’s contribution to the build time. Below are the “before” and “after” reports of json4s, a Scala project. The command call graphs (lower ones) show that java (scalac) took 99% of the original build. Since the scalac invocations are shortcut (cutting the second build’s time to less than 2% of the first one) they don’t even show up in the accelerated second build’s call graph. What’s left to be executed again in the second run are env, perl, make and a few simple commands.

Report screenshots: first run (102% of the vanilla build), second run (1.4% of the vanilla build), and command details.

The upper graphs are the process trees, with expandable nodes (in blue) also showing which command invocations were shortcut (green). Clicking on a node shows details of the command and the reason if it was not shortcut.

Could I accelerate my project more?

Firebuild works best for builds with CPU-intensive processes and comes with defaults to not cache very quick commands, such as sh, grep, sed, etc., because caching those would take cache space and shortcutting them may not speed up the build that much. They can still be shortcut with their parent command. Firebuild’s strength is that it can find shortcutting points in the process tree automatically, e.g. from sh -c 'bash -c "sh -c echo Hello World!"' bash would be shortcut, but none of the sh commands would be cached. In typical builds there are many such commands from the skip_cache list. Caching those commands with firebuild -o 'processes.skip_cache = []' can improve acceleration and make the reports smaller.

Firebuild also supports several debug flags and -d proc helps finding reasons for not shortcutting commands:

... FIREBUILD: Command "/usr/bin/make" can't be short-cut due to: Executable set to be not shortcut, {ExecedProcess 1329.2, running, "make -f debian/rules build", fds=[{FileFD fd=0 {FileOFD ... FIREBUILD: Command "/usr/bin/sort" can't be short-cut due to: Process read from inherited fd , {ExecedProcess 4161.1, running, "sort", fds=[{FileFD fd=0 {FileOFD ... FIREBUILD: Command "/usr/bin/find" can't be short-cut due to: fstatfs() family operating on fds is not supported, {ExecedProcess 1360.1, running, "find -mindepth 1 ... ...

make, ninja and other incremental build tools are not accelerated because they compare file timestamps, but at least they are fast. Ideally the slower build steps can be re-implemented in ways that can be shortcut by firebuild.

I hope these tools help speed up your builds with very little effort, but if not, and you find something to fix or improve in firebuild itself, please report it or just leave feedback!

Happy speeding, but not on public roads!

Categories: FLOSS Project Planets

Codementor: C Programming in 2023 ?

Planet Python - Thu, 2023-01-26 02:54
Is it worth learning the C language?
Categories: FLOSS Project Planets

Matt Brown: Vision, Mission and Strategy

Planet Debian - Wed, 2023-01-25 23:30

This is part one of a two-part post, covering high-level thoughts around my motivations and vision. Part two (to be published tomorrow) contains my specific goals for 2023.

A new year is upon us! My plan was to be 6 months into the journey of starting a business by this point.

I made some very tentative progress towards that goal in 2022, registering a company and starting some consulting work, but on the whole I’ve found it much harder than expected to gather the necessary energy to begin that journey in earnest.

Reflection

I’m excited about the next chapter of my career, so the fact that I’ve been struggling to get started has been frustrating. The only upside is that the delay has given me plenty of time to reflect on the last few years, learn from them, and draw some lessons to help me better manage and sustain my energy going forward.

Purpose

A large part of what I’ve realised is that I should have left Google years ago. It was a great place to work, and I’m incredibly grateful for everything I learned and received during my time there. For years it was my dream job, but my happiness had been declining, and instead of taking the (relatively small) risk of leaving to the unknown, I tried several variations of team and role in the hope of restoring the dream.

The reality is that a significant chunk of my motivation and energy comes from being able to link my work back to a bigger purpose that delivers concrete positive impact in the world. I felt that link through Google’s mission to make information universally accessible and useful for the first 10-11 years, but for the latter 4-5 years my ability to see that link was tenuous at best. Trying to push through the challenges without that link providing a reliable source of energy is what drove my unhappiness and led to needing a longer break to recharge.

I expect the challenges of starting a business to be even greater than what I experienced at Google, so the lesson I’m taking from this is that it’s crucial for me to understand the link between my work and the bigger purpose, with concrete positive impact in the world, that I’m aiming to contribute to.

Community

The second factor that I’ve slowly come to realise has been missing from my career in the last few years has been participation in a professional community and a variety of enriching interpersonal relationships. As much as I value and need this type of interaction, fostering and sustaining it unfortunately doesn’t come naturally to me. Working remotely since 2016 and then taking a 9 month break out of the industry are not particularly helpful contributors to building and maintaining a wide network either!

The lesson here is simply that I’m going to need to push past my comfort zone in reaching out and introducing myself to a range of people in order to grow my professional network, and equally I need to be diligent and disciplined in making time to maintain and regularly connect with people whom I respect and find energising to interact with.

Personal Influences

Lastly, I’ve been reflecting on a set of principles that are important to me. These are not so much new lessons, more confirming to myself what I value moving forward. There are many things I could include here, but to keep it somewhat brief, the key influences on my thinking are:

  • Independence - I can’t entirely explain why or where it comes from, but since the start of my professional career (which I consider to be my consulting/freelancing development during high school) I’ve understood that I’m far more motivated by building and growing my own business than I am by working for someone else. Working for myself has always felt like the default and sensible course - I’m excited to get back to that.

  • Openness - Open is better than closed, in terms of software, business model and organisational processes. This continues to be a strong belief and something I want to uphold in my business endeavours. Competition should be based on superior technical quality or service, not artificial constraints or barriers to entry that lock customers and users into a single solution or market. Protocols and networks should be open for wide participation and easily accessible to new entrants and competition.

  • People first - This applies both to how we work with each other - respectfully, valuing diversity and with integrity, and to how we apply technology to our world - with consideration for all stakeholders it may affect and awareness of both the intended and potential unintended impacts.

Framework

Using Vision, Mission and Strategy as a planning framework has worked quite well for me when building and growing teams over the years, so I plan to re-use it personally to help organise the above reflections into a hopefully cohesive plan that results in some useful 2023 goals.

Vision

Software systems contribute direct and meaningful impact to solving real problems in our world.

Each word has a fair bit of meaning behind it for me, so breaking it down a little bit:

  • software systems - excite me because software is eating the world and has significant potential to do good.
  • contribute - Software alone doesn’t solve problems, and misapplied can easily make things worse. To contribute software needs to be designed intentionally and evaluated with an awareness of risks it could pose within the complex system that is our modern world.
  • direct and meaningful impact - I’m not looking for broad outcomes like improving productivity or communication, which apply generally across many problems. I want to see software applied to solve specific blockers whose removal unlocks significant progress towards solving a problem.
  • real - as opposed to straightforward problems. The types of issue where acknowledging it as a “real problem” often ends the sentence because it feels too big to tackle. Climate change and pandemic risk are examples of real problems. Decentralising finance or selling more widgets are not.
  • in our world - is mostly filler to round out the sentence nicely, but I do think we should probably sort out the mess we’re making on our own planet before trying to colonise anywhere else.
Mission

To lead the development and operation of software systems that deliver new opportunities for individuals, businesses and communities to solve the real problems in their community.

Again breaking down the intent a little bit:

  • lead - having a meaningful impact on real problems is a big job. I won’t succeed as a one man band. It will require building and growing a larger team.
  • development and operation - development is fun and necessary, but I also wanted to highlight that the ongoing operation and integration of those software systems into the broader social and human systems of our world is an equally important and ongoing need.
  • new opportunities - are important to drive and motivate investment in the adoption of technology. Building or operating a system that maintains the status quo is not motivating for me.
  • individuals, businesses and communities - aka everyone! But each of these groups (as examples, not specific) will have diverse roles, needs and interactions with the software which must be considered to ensure the system achieves the desired contribution and impact.
  • their community - refines the ambition from the vision to an achievable scope of action within which to execute the mission. We won’t solve our problems by targeting one big global fix, but if we each participate in solving the problems in our community, collectively it will make a difference.
Strategy

Build a sustainable business that provides a home and infrastructure to support a continuous cycle of development, validation and growth of software systems fulfilling the mission and vision above.

  • Accumulate meaningful impact via a portfolio of systems rather than one big bet.
  • Focus on opportunities that promote the decarbonisation of our economy (the most pressing problem our society faces), but not at the expense of ignoring compelling opportunities to contribute impact to other real problems also.
  • Favour the marathon over the sprint - while being first can be fun and convey benefits, it’s often the fast-followers who learn from the initial mistakes and deliver lasting change and broader impact.

In keeping with the final bullet point, I aim to evaluate the strategy against a long-term view of success. What excites me about it is that it has the potential to provide structure and clarity for my work while also enabling many future paths - from operating a portfolio of micro-SaaS products that each solve real problems for a specific niche or community, to diving deep into a single compelling opportunity for a year or two, to joining with others to partner on shared ventures, or some combination of all three and other variations in between.

Your Thoughts

I consider this a first draft, which I intend to revise and evolve further over the next 6-12 months. I don’t plan major changes to the intent or underlying ideas, but finding the best words to express and convey that intent clearly is not something I expect to get right on the first take.

I’d love to have your feedback and engagement as I move forward with this strategy - please use the box in the sidebar (or on the front page, if you’re on a phone) to be notified when I post new writing, drop me an email with your thoughts or even book a meeting to say hi and discuss something in detail.

Categories: FLOSS Project Planets

Peoples Blog: Multisite Local environment setup with DDEV and Drupal

Planet Drupal - Wed, 2023-01-25 23:00
In this article we are going to see how to set up a multisite environment with DDEV on the local machine, assuming people are aware of configuring Drupal multisite on the Drupal side. As we all know, DDEV is an open source tool for running local PHP development environments in minutes, which makes developers' lives easier during the local environment setup process…
Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppTOML 0.2.1 on CRAN: Small Build Fix for Some Arches

Planet Debian - Wed, 2023-01-25 19:54

Two weeks after the release of RcppTOML 0.2.0 and the switch to toml++, we have a quick bugfix release 0.2.1.

TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML – though sadly these may be too ubiquitous now. TOML is frequently used with projects such as the Hugo static blog compiler or the Cargo system of Crates (aka “packages”) for the Rust language.
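To make the format concrete, here is a small, made-up TOML snippet. For a self-contained, runnable illustration it is parsed below with Python's standard tomllib module (Python 3.11+); RcppTOML itself provides comparable TOML parsing for R, and none of the keys shown come from a real project:

# A made-up TOML configuration, parsed with Python's standard library
# purely to illustrate the format; RcppTOML offers similar parsing in R.
import tomllib

config_text = """
title = "example"

[server]
host = "127.0.0.1"
port = 8080
enable_tls = false
"""

config = tomllib.loads(config_text)  # returns a nested dict
print(config["server"]["port"])      # 8080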

Some architectures, aarch64 included, got confused over ‘float16’, which is of course a tiny two-byte type nobody should need. After consulting with Mark, we concluded (at least for now) to simply override this and exclude the use of ‘float16’.

The short summary of changes follows.

Changes in version 0.2.1 (2023-01-25)
  • Explicitly set -DTOML_ENABLE_FLOAT16=0 to permit compilation on some architectures stumbling of the type.

Courtesy of my CRANberries, there is a diffstat report for this release. More information is on the RcppTOML page. Please use the GitHub issue tracker for issues and bug reports.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

FSF Blogs: Thank you and a very warm welcome to our new members

GNU Planet! - Wed, 2023-01-25 16:17
January 20, 2023 marked the end of our most recent fundraising campaign and associate member drive. We are proud to add 330 new associate members to our organization, and we have immense appreciation for the community that helped us get there. Please help us share our appreciation.
Categories: FLOSS Project Planets

poke @ Savannah: GNU poke 3.0 released

GNU Planet! - Wed, 2023-01-25 14:08

I am happy to announce a new major release of GNU poke, version 3.0.

This release is the result of a year of development.  A lot of things have changed and improved with respect to the 2.x series; we have fixed many bugs and added quite a lot of new exciting and useful features.  See below for a description of many of them.

From now on, we intend to do not one but two major releases of poke every year.  What is moving us to change this is the realization that users have to wait too long to enjoy new features, which are continuously being added in a project this young and active.

The tarball poke-3.0.tar.gz is now available at
https://ftp.gnu.org/gnu/poke/poke-3.0.tar.gz.

> GNU poke (http://www.jemarch.net/poke) is an interactive, extensible editor for binary data.  Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them.


Thanks to the people who contributed with code and/or documentation to this release.  In no particular order, they are:

   Mohammad-Reza Nabipoor
   Arsen Arsenović
   Luca Saiu
   Bruno Haible
   apache2
   Indu Bhagat
   Agathe Porte
   Alfred M. Szmidt
   Daiki Ueno
   Darshit Shah
   Jan Seeger
   Sergio Durigan Junior

   ... and yours truly

As always, thank you all!

But wait, this time we also have special thanks:

To Bruno Haible for his invaluable advice and his help in thoroughly testing this new release on many different platforms and configurations.

To the Sourceware overseers, Mark Wielaard, Arsen Arsenović, and Sam James for their help in setting up the buildbots we are using for CI at sourceware.

What is new in this release:

User interface updates
  • A screen pager has been added to the poke application.  If enabled with the `.set pager yes' option, output will be paged one screenful at a time.
  • A tracer has been added to libpoke and the poke application. If enabled with the `.set tracer yes' option, subsequently loaded Poke types will be instrumented so that calls to user-defined handlers are executed when certain events happen:
    • Every time a field gets mapped.
    • Every time a struct/union gets mapped.
    • Every time a field gets constructed.
    • Every time a struct/union gets constructed.
    • Every time an optional field is omitted when mapping or constructing.
  • A new command sdiff (for "structured diff") has been added to the poke application, which provides a way to generate patchable diffs of mapped structured Poke values.  This command is an interface to the structured diffs provided by the new diff.pk pickle.
  • When no name is passed to the .mem command, a unique name for the memory IOS with the form N will be used automatically, where N is a positive integer.
  • Auto-completion of 'attributes is now available in the poke application.
  • Constraint errors now contain details on the location (which field) where the constraint error happens, along with the particular expression that failed.
  • Inline assembler expressions and statements are now supported:

    ,----
    | asm (TEMPLATE [: OUTPUTS [: INPUTS]])
    | asm TYPE: (TEMPLATE [: INPUTS])
    `----

  • Both `printf' and `format' now support printing values of type `any'.
  • Both `printf' and `format' now support printing integral values interpreted as floating-point values encoded in IEEE 754.  Format tags %f, %g and %e are supported.  This feature, along with the new ieee754.pk pickle, eases dealing with floating-point data in binary data.
  • Pre-conditional optional fields are added to complement the currently supported post-conditional optional fields. A pre-conditional optional field like the following makes FNAME optional based on the evaluation of CONDITION.  But the field itself is not mapped if the condition evaluates to false:

    ,----
    | if (CONDITION)
    |   TYPE FNAME;
    `----

  • A new option `.set autoremap no' can be used in order to tell poke to not remap mapped values automatically.  This greatly speeds up things, but assumes that the contents of the IO space are not updated out of the control of the user.  See the manual for details.
  • The :to argument to the `extract' command is now optional, and defaults to the empty string.
  • ${XDG_CONFIG_HOME:-$HOME/.config} is now preferred to XDG_CONFIG_DIRS.
Poke Language updates
  • Array and struct constructors are now primaries in the Poke syntax. This means that it is no longer necessary to enclose them in parentheses in constructions like:

    ,----
    | (Packet {}).field
    `----

    and this is now accepted:
    ,----
    | Packet {}.field
    `----

  • Bit-concatenation is now supported in l-values.  After executing the following code the value of `a' is 0x1N and the value of `b' is (uint<28>)0x2345678:

    ,----
    | var a = 0 as int<4>;
    | var b = 0 as uint<28>;
    |
    | a:::b = 0x12345678;
    `----

  • Arrays can now be indexed by size, by specifying an offset as an index.  This is particularly useful for accessing structures such as string tables without having to explicitly iterate on the array's elements.
  • Union types can now be declared as "integral".  The same features of integral structs are now available for unions: integration, deintegration, the ability of being used in contexts where an integer is expected, etc.
  • Support for "computed fields" has been added to struct and union types.  Computed fields are accessed just like regular fields, but the semantics of referring to them and of assigning to them are specified by the user by the way of defining getter and setter methods.
  • This version introduces three new Poke attributes that work on values of type `any':

    ,----
    | VAL'elem (N)
    |    evaluates to the Nth element in VAL, as a value of type `any'.
    |
    | VAL'eoffset (N)
    |    evaluates to the offset of the Nth element in VAL.
    |
    | VAL'esize (N)
    |    evaluates to the size of the Nth element in VAL.
    |
    | VAL'ename (N)
    |    attribute evaluates to the name of the Nth element in VAL.
    `----

  • Two new operators have been introduced to facilitate operating with Poke arrays as stacks in an efficient way: apush and apop.  Since these operators change the size of the involved arrays, they are only allowed in unbounded arrays.
  • Poke programs can now hook in the IO subsystem by installing functions that will be invoked when certain operations on IO spaces are being performed:

    ,----
    | ios_open_hook
    |   Functions in this hook are invoked once a new IO space has been
    |   opened.
    |
    | ios_set_hook
    |   Functions in this hook are invoked once the current IO space
    |   changes.
    |
    | ios_close_pre_hook
    | ios_close_hook
    |   Functions in these hooks are invoked before and after an IO space is
    |   closed, respectively.
    `----

  • The 'length attribute is now valid in values of type `any'.
  • Poke declarations can now be annotated as `immutable'.  It is not allowed to re-define immutable definitions.
  • A new compiler built-in `iolist' has been introduced, that returns an array with the IO space identifiers of currently open IOS.
  • We have changed the logic of the EXCOND operator ?!.  It now evaluates to 1 (true) if the execution of the first operand raises the specified exception, and to 0 (false) otherwise.  We profusely apologize for the backwards incompatibility, but this is way better than the previous (reversed) logic.
  • The containing struct or union value can now be referred to as SELF in the body of methods.  SELF is of type `any'.
  • Integer literal suffixes (B, H, U, etc) are case-insensitive, but until now lowercase `b' wasn't being recognized as such.  Now `1B' is the same as `1b'.
  • Casting to union types now raises a compile-time error.
  • If no explicit message is specified in calls to `assert', a default one showing the source code of the failing condition is constructed and used instead.
  • An operator `remap' has been introduced in order to force a re-map of some mapped Poke value.
  • Signed integral types of one bit are not allowed.  How could they be, in two's complement?
  • The built-in function get_time has been renamed to gettime, to follow the usual naming of the corresponding standard C function.
Standard Poke Library updates
  • New standard functions:

    ,----
    | eoffset (V, N)
    |   Given a value of type `any' and a name, returns the offset of
    |   the element having that name.
    |
    | openset (HANDLER, [FLAGS])
    |   Open an IO space and make it the current IO space.
    |
    | with_temp_ios ([HANDLER], [FLAGS], [DO], [ENDIAN])
    |   Execute some code with a temporary IO space.
    |
    | with_cur_ios (IOS, [DO], [ENDIAN])
    |   Execute some code on some given IO space.
    `----

libpoke updates
  • New API function pk_struct_ref_set_field_value.
  • New API function pk_type_name.
Pickles updates
  • New pickles provided in the poke distribution:

    ,----
    | diff.pk
    |   Useful binary diffing utilities.  In particular, it implements
    |   the "structured diff" format as described in
    |   https://binary-tools.net/bindiff.pdf.
    |
    | io.pk
    |   Facilities to dump data to the terminal.
    |
    | pk-table.pk
    |   Convenient facilities to Poke programs to print tabulated data.
    |
    | openpgp.pk
    |   Pickle to poke at OpenPGP RFC 4880 data.
    |
    | sframe.pk
    | sframe-dump.pk
    |   Pickles for the SFrame unwinding format, and related dump
    |   utilities.
    |
    | search.pk
    |   Utility for searching data in IO spaces that conform to some
    |   given Poke type.
    |
    | riscv.pk
    |   Pickle to poke at instructions encoded in the RISC-V instruction
    |   set (RV32I).  It also provides methods to generate assembly
    |   language.
    |
    | coff.pk
    | coff-aarch64.pk
    | coff-i386.pk
    |   COFF object files.
    |
    | pe.pk
    | pe-amd64.pk
    | pe-arm.pk
    | pe-arm64.pk
    | pe-debug.pk
    | pe-i386.pk
    | pe-ia64.pk
    | pe-m32r.pk
    | pe-mips.pk
    | pe-ppc.pk
    | pe-riscv.pk
    | pe-sh3.pk
    |   PE/COFF object files.
    |
    | pcap.pk
    |   Capture file format.
    |
    | uuid.pk
    |   Universally Unique Identifier (UUID) as defined by RFC4122.
    |
    | redoxfs.pk
    |   RedoxFS file system of Redox OS.
    |
    | ieee754.pk
    |   IEEE Standard for Floating-Point Arithmetic.
    `----

  • The ELF pickle now provides functions implementing ELF hashing.
Build system updates
  • Configuring the poke sources with --disable-hserver is now supported.
Documentation updates
  • Documentation for the `format' language construction has been added to the poke manual.
Other updates
  • A new program poked, for "poke daemon", has been contributed to the poke distribution by Mohammad-Reza Nabipoor.  poked links with libpoke and uses Unix sockets to act as a broker to communicate with an instance of a Poke incremental compiler.  This is already used by several user interfaces to poke.
  • The machine-interface subsystem has been removed from poke, in favor of the poked approach.
  • The example GUI that was intended to be a test tool for the machine interface has been removed from the poke distribution.
  • Many bugs have been fixed.

--
Jose E. Marchesi
Frankfurt am Main
26 January 2023

Categories: FLOSS Project Planets

The 2023 State of Open Source Report confirms security as top issue

Open Source Initiative - Wed, 2023-01-25 10:43

For the second year in a row, the Open Source Initiative and OpenLogic by Perforce collaborated to launch a global survey about the use of Open Source software in organizations. We drew hundreds of responses from all over the world, and once again, the results are illustrative of the Open Source space as a whole, including use, adoption, challenges, and the level of investment and maturity in Open Source software. 

The 2023 State of Open Source Report presents key usage, adoption, and trend data that paints a complete picture of Open Source software in organizations today. The report also includes a breakdown of the most important technologies by category, and across demographics and firmographics. 

The world of technology is constantly changing, and it can be hard to stay up to date on the latest software. The report features more than 160 of the most popular Open Source technologies and tools, as well as insights into how organizations are investing in Open Source and the most desirable technologies.  

We encourage you to read sections of interest or the whole report, which covers every major category including Linux distributions, infrastructure software, cloud-native, programming languages and runtimes, frameworks, data technologies, SDLC and build tools, automation and configuration tooling, and of course, CI/CD. 

Some of the key findings: 

  • Open Source continues to grow in prominence; 4 in 5 survey respondents, a whopping 80%, indicated that they increased the use of Open Source software in their organizations in the past year, with 41% reporting a “significant” increase.  
  • Open Source technologies play an integral role in all types of operations. Respondents listed Linux, Apache HTTP, Git, Node.js, WordPress, Tomcat, Jenkins, PHP, and NGINX as the most business-critical software for their organizations.  
  • Container technology and software development lifecycle (SDLC) tools ranked as the most used technologies. Container and container orchestration jumped from 18% to 33% of respondents’ usage, and they also received the highest amount of investment by organizations. 
  • Cost reduction is no longer a key reason for Open Source adoption. In the 2022 report, the lack of license cost and overall cost reduction was the second most common reason for using Open Source, but this year it has dropped to ninth place.  
  • The top Open Source adoption driver remains access to innovations and the latest technologies, illustrating how users value being on the cutting edge and see this as a competitive advantage. Organizations also choose Open Source due to the ability to contribute to, and influence the direction of, projects.  
  • Security is top of mind. Maintaining security policies or compliance is the top support challenge for organizations using Open Source. Over 46% of organizations are performing security scans to identify vulnerabilities. 
  • Technical support is needed for installations, upgrades, and configuration issues. Notably, personnel experience and proficiency again this year is highly ranked as a support concern across organizations of all sizes.  
  • End-of-life (EOL) Open Source software remains in organizations for a long time. Nearly 12 months after AngularJS became EOL, 15% of organizations are still using it, the exact same percentage we saw in the 2022 report. In larger organizations, it’s up to 20%. As expected with EOL CentOS Linux, there was a decline in usage; it’s now at only 15.14%, while CentOS Stream and Rocky Linux became more widely adopted.  
  • 36.79% of organizations contribute to Open Source, which includes contributions to projects or to organizations (code or other activities). This is a 5% increase from last year, so it’s trending in the right direction and is a good sign for many communities. 
  • Over 25% of respondents in most industries are generating software bill of materials (SBOMs). Retail, government, banking, insurance, and financial services lead this category with the highest implementation of SBOM generation. 
  • OSI’s membership has grown over the last year; 17% of respondents already sponsor OSI. We are encouraged by growing community participation and excited for all upcoming OSI initiatives and events in 2023. 

The 2023 State of Open Source Report clearly demonstrates how many organizations are moving from being merely consumers to engaging with Open Source communities and gaining expertise in full technology stacks. In some cases, they are even becoming leaders — driving and influencing the direction of new projects. Be sure to download the report and stay tuned for more content, analysis, and webinars in the coming weeks and months from OSI and OpenLogic by Perforce! 

Categories: FLOSS Research

GNU Guile: GNU Guile 3.0.9 released

GNU Planet! - Wed, 2023-01-25 09:25

We are pleased to announce the release of GNU Guile 3.0.9! This release fixes a number of bugs and adds several new features, among which:

  • New bindings for POSIX functionality, including bindings for the at family of functions (openat, statat, etc.), a new spawn procedure that wraps posix_spawn and that system* now uses, and the ability to pass flags such as O_CLOEXEC to the pipe procedure.
  • A new bytevector-slice procedure.
  • Reduced memory consumption for the linker and assembler.

For full details, see the NEWS entry, and check out the download page.

Happy Guile hacking!

Categories: FLOSS Project Planets

PyCharm: In Conversation With the Reloadium Team: Hot Reload and a Future Webinar

Planet Python - Wed, 2023-01-25 09:18

PyCharm is working hard on Python developer experience (DX). There’s a project with a very promising DX boost using “hot reloading”: Reloadium. It really speeds up turnaround time on working with your code, and with the PyCharm plugin, brings fresh new ideas to running, debugging, and profiling.

On January 27, we have a webinar with Reloadium to show it in action. As an intro, we did a Q&A with the team.

Register now!

Quick Hit: Why should people care about Reloadium?

DK: Reloadium is a valuable tool for developers that offers hot reloading, shortening the development cycle and preserving application state for debugging. Its PyCharm plugin integration makes Reloadium easy to use and offers additional features for efficient debugging. It’s a powerful tool for improving workflow and streamlining development.

Now, some introductions, starting with Sajan, who will be presenting.

ST: Thank you for the opportunity to be a part of this, Paul, we are very excited to share a glimpse into the problem we are solving for Python developers. My name is Sajan, I am a Business Psychologist specialising in the areas of business development, change management, and human factors. I have been working closely with Damian to create and improve Reloadium by identifying the gaps that exist in terms of productivity with Python and finding ways to close these gaps. 

DK: My name is Damian and I am the creator of Reloadium. I am currently employed as a Technical Lead and Full-Stack Developer on a full-time basis. The concept for Reloadium originated from a project I was working on, where I found that I had to frequently deal with long restart times. I created an initial version of Reloadium and used it to optimize my workflow, cutting down the development time significantly. That experience motivated me to continue to develop the tool and make it available to other developers. I am dedicated to providing a reliable and user-friendly hot reloading solution that can improve the development experience and increase efficiency in the programming process.

Explain hot reloading, especially from its use in frontend dev.

ST: Hot reloading is a valuable tool for developers. It allows developers to make updates to an application’s source code and see the resulting changes in real-time. This feature streamlines the development process by eliminating the need for manual application restarts, thus saving time and improving the efficiency of the development cycle. Additionally, one of the key benefits of hot reloading is the ability to preserve the state of the application, which can be particularly useful during bug fixing and troubleshooting.

In a front-end development context, hot reloading enhances the development experience by providing instant feedback on the effect of code changes, allowing developers to make adjustments and iterate quickly without the need for manual page refreshes. 

Hot reloading is a powerful feature that enables developers to work more efficiently, improve their workflow and deliver better-quality software.
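To make the underlying idea concrete, here is a minimal, hypothetical sketch of naive hot reloading in plain Python, using importlib.reload to pick up edited code without restarting the process. This only illustrates the general concept, it is not how Reloadium works internally, and the app module in the usage comment is made up:

import importlib
import os
import time


def watch_and_reload(module, interval=1.0):
    """Re-import `module` whenever its source file changes on disk."""
    path = module.__file__
    last_mtime = os.path.getmtime(path)
    while True:
        time.sleep(interval)
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:
            last_mtime = mtime
            importlib.reload(module)  # the edited code takes effect here
            print(f"Reloaded {module.__name__}")


# Hypothetical usage, assuming a module named app that you are editing:
# import app
# watch_and_reload(app)

A real tool like Reloadium must additionally preserve application state and patch code that is already running, which is what makes it much harder than this sketch suggests.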

Others worked on this. Damian, you then went off for a long time. What are the subtle, hard parts?

DK: Implementing hot reloading in Python has proven to be a challenging task, as previous attempts have not been able to consistently provide a reliable and user-friendly experience. One of the main difficulties in this process is maintaining compatibility with subtle variations in different versions of Python and operating systems. To ensure consistent quality between releases, it has been necessary to implement rigorous unit and end-to-end testing procedures. This has been a crucial step in identifying and addressing any issues that may arise during the development process, and in ensuring that the final product is stable and reliable for users. 

What are some of the fresh ideas Reloadium brings to DX?

ST: Reloadium brings several cutting-edge features to the developer experience, perhaps the most notable of which is Frame Reloading. This feature allows developers to hot reload the current call stack frame, enabling them to modify code that is currently being executed. This is a significant innovation in the field of software development, as it allows developers to make changes while debugging, which can save a significant amount of time. Another innovative feature is Hot Reloading of Unhandled Exceptions. Reloadium will break at unhandled exceptions and allow developers to fix the error by hot reloading changes. This feature is particularly useful when debugging non-deterministic bugs, as it can save countless hours in the reproducing and resolving of these issues.

How does the PyCharm plugin help improve the DX part?

DK: The PyCharm plugin provides a seamless integration of Reloadium, making it an invaluable tool for improving the development experience. In addition to integrating the core hot reloading functionality of Reloadium, the plugin also includes several advanced debugging features, such as time and memory profiling, call stack frame dropping, restarting frames, and hot reloading of unhandled exceptions. These features provide developers with more robust tools for identifying and resolving issues, thus increasing the efficiency of the development process. Furthermore, the plugin provides visual feedback on hot reloading and code execution, which greatly improves the user experience. Overall, the PyCharm plugin’s integration of Reloadium and advanced debugging features make it a powerful tool for enhancing the development experience.

Another fresh idea: you’ve formed a company to make this sustainable. Tell us about that.

ST: We are in the process of forming a company that focuses on developer tools like Reloadium to improve productivity, and ultimately helps in making programming easier. We are committed to making programming more developer-friendly and accessible so that people can feel free to design, create, and innovate. Through this company, we can keep improving and managing products like Reloadium so that it stays relevant, and continues to help improve productivity for developers.

Give us a teaser for what folks will see in the webinar.

ST:  In this webinar, we will explore:

  • how developers can experience Reloadium’s out-of-the-box experience on PyCharm
  • Reloadium’s hot-reloading feature in action during front-end and back-end development scenarios 
  • The means of further optimising your workflow by using Reloadium’s time-profiling and memory-profiling features. 

Register now!

We urge you to join us in this journey to learn about this pioneering DevTool so that you can make more time to design, create, and innovate.  

Categories: FLOSS Project Planets

Real Python: The Python Standard REPL: Try Out Code and Ideas Quickly

Planet Python - Wed, 2023-01-25 09:00

The Python standard shell, or REPL (Read-Eval-Print Loop), allows you to run Python code interactively while working on a project or learning the language. This tool is available in every Python installation, so you can use it at any moment.

As a Python developer, you’ll spend a considerable part of your coding time in a REPL session because this tool allows you to test new ideas, explore and experiment with new tools and libraries, refactor and debug your code, and try out examples.

In this tutorial, you’ll learn how to:

  • Run the Python standard REPL, or interactive shell
  • Write and execute Python code in an interactive session
  • Quickly edit, modify, and reuse code in a REPL session
  • Get help and introspect your code in an interactive session
  • Tweak some features of the standard REPL
  • Identify the standard REPL’s missing features

You’ll also learn about available feature-rich REPLs, such as IDLE, IPython, bpython, and ptpython.

To get the most out of this tutorial, you should be familiar with your operating system’s command line, or terminal. You should also know the basics of using the python command to run your code.

Free Sample Code: Click here to download the free sample code that you’ll use to explore the capabilities of Python’s standard REPL.

Getting to Know the Python Standard REPL

In computer programming, you’ll find two kinds of programming languages: compiled and interpreted languages. Compiled programming languages like C and C++ will have a compiler program, which takes care of translating the language’s code into machine code.

This machine code is typically saved into an executable file. Once you have an executable file, you can run your program on any compatible computer system without needing the compiler or the source code.

In contrast, interpreted languages like Python need an interpreter program. This means that you need to have a Python interpreter installed to run Python code on your computer. Some may consider this characteristic a drawback because it can make your code distribution process much more difficult.

However, in Python, having an interpreter offers one significant advantage that comes in handy during your development and testing process. The Python interpreter allows for what’s known as an interactive REPL (Read-Eval-Print Loop), or shell, which reads a piece of code, evaluates it, and then prints the result to the console in a loop.

Note: In this tutorial, you’ll learn about the CPython standard REPL, which is available in all the installers of this Python distribution. If you don’t have CPython yet, then check out Python 3 Installation & Setup Guide for detailed instructions.

The Python interpreter can execute Python code in two modes:

  1. Script, or program
  2. Interactive, or REPL

In script mode, you use the interpreter to run a source file as an executable program. In this case, Python loads the file content and runs the code line by line, following the script or program’s execution flow. Alternatively, interactive mode is when you launch the interpreter and use it as a platform to run code that you type in directly.
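As a quick illustration of the two modes, here is a minimal sketch of a terminal session; the file name hello.py is made up, and on your system the command may be python3 instead of python:

$ python hello.py
Hello, World!

$ python
>>> print("Hello, World!")
Hello, World!

The first invocation runs a script from start to finish, while the second drops you into an interactive prompt.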

Note: The name Python is commonly used to denote two different things: the language itself, and the interpreter. In this tutorial, you’ll find the explicit term Python interpreter only in situations where ambiguity can arise.

In this tutorial, you’ll learn how to use the Python standard REPL to run code interactively, which allows you to try ideas and test concepts when using and learning Python. Are you ready to take a closer look at the Python REPL? Keep reading!

What Is Python’s Interactive Shell or REPL?

When you run the Python interpreter in interactive mode, you open an interactive shell, also known as an interactive or a REPL session. In this shell, your keyboard is the input source, and your screen is the output destination.

Note: In this tutorial, you’ll find the terms interactive shell, interactive session, interpreter session, and REPL session used interchangeably.

The input consists of Python code, which the interpreter parses and evaluates. After that’s done, the interpreter automatically displays the result on your screen, and the process starts again as a loop.

So, Python’s REPL is an interactive way to talk to your computer using the Python language. It’s like live chat. The whole process is known as a REPL because it goes through four steps that run under the hood:

  1. Reading your input, which consists of Python code as expressions and statements
  2. Evaluating your Python code, which generates a result or causes side effects
  3. Printing any output so that you can check your code’s results and get immediate feedback
  4. Looping back to step one to continue the interaction
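Concretely, a single pass through this loop looks something like the following minimal session:

>>> 2 + 2
4
>>>

The interpreter reads the expression 2 + 2 that you typed (step 1), evaluates it (step 2), prints 4 (step 3), and then shows the >>> prompt again, ready for your next input (step 4).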

This feature of Python is a powerful tool that you’ll wind up needing in your Python coding adventure, especially when you’re learning the language or when you’re in the early stages of a development process. That’s because the REPL offers several benefits, which you’ll learn about next.

Read the full article at https://realpython.com/python-repl/ »

Categories: FLOSS Project Planets

Python for Beginners: Convert Epoch to Datetime in Python

Planet Python - Wed, 2023-01-25 09:00

Most software applications log date and time values as UNIX timestamps. While analyzing the logged data, we often need to convert the Unix timestamp to date and time values. In this article, we will discuss different ways to convert UNIX timestamps to datetime in python. We will also discuss how to convert a negative epoch to datetime in Python.

Table of Contents
  1. Unix Epoch to Datetime in Python
  2. Unix Timestamp to Datetime String
  3. Datetime to UNIX Timestamp in Python
  4. Convert Negative Timestamp to Datetime in Python
  5. Conclusion
Unix Epoch to Datetime in Python

To convert epoch to datetime in python, we can use the fromtimestamp() method defined in the datetime module. The fromtimestamp() method takes the epoch as its input argument and returns the datetime object. You can observe this in the following example. 

from datetime import datetime

epoch=123456789
print("The epoch is:")
print(epoch)
datetime_obj=datetime.fromtimestamp(epoch)
print("The datetime object is:")
print(datetime_obj)

Output:

The epoch is:
123456789
The datetime object is:
1973-11-30 03:03:09

In this example, we have converted the epoch 123456789 to datetime value using the fromtimestamp() method.

The above approach will give you time according to the time zone on your machine. If you want to get the UTC time from the timestamp, you can use the utcfromtimestamp() instead of the fromtimestamp() method as shown below.

from datetime import datetime

epoch=123456789
print("The epoch is:")
print(epoch)
datetime_obj=datetime.utcfromtimestamp(epoch)
print("The datetime object is:")
print(datetime_obj)

Output:

The epoch is:
123456789
The datetime object is:
1973-11-29 21:33:09

In this example, you can observe that the datetime output shows a time approximately 5 hours and 30 minutes earlier than the time in the previous example, even though we have used the same epoch value. This difference is due to the fact that the time zone of my computer is set to +5:30 hours.
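If you want the conversion to be independent of the machine's local time zone, one option (a minimal sketch using only the standard library) is to pass an explicit time zone to fromtimestamp():

from datetime import datetime, timezone

epoch = 123456789

# Ask for UTC explicitly instead of relying on the machine's local time zone.
datetime_obj = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(datetime_obj)  # 1973-11-29 21:33:09+00:00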

Unix Timestamp to Datetime String

To convert the Unix timestamp to a datetime string in python, we will first create a datetime object from the epoch using the fromtimestamp() method or the utcfromtimestamp() method. Then, you can use the strftime() method to convert the datetime object to a string.

The strftime() method, when invoked on a datetime object, takes the format of the required datetime string as its input argument and returns the output string. You can observe this in the following example.

from datetime import datetime

epoch=123456789
print("The epoch is:")
print(epoch)
datetime_obj=datetime.utcfromtimestamp(epoch)
print("The datetime object is:")
print(datetime_obj)
datetime_string=datetime_obj.strftime( "%d-%m-%Y %H:%M:%S" )
print("The datetime string is:")
print(datetime_string)

Output:

The epoch is:
123456789
The datetime object is:
1973-11-29 21:33:09
The datetime string is:
29-11-1973 21:33:09

In this example, the format specifiers used in the strftime() method are as follows.

  • %d is the placeholder for the day of the month.
  • %m is the placeholder for month.
  • %Y is the placeholder for year.
  • %H is the placeholder for hour.
  • %M is the placeholder for minutes.
  • %S is the placeholder for seconds.

You can also change the position of the placeholders in the string to change the date format.
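For instance, a small self-contained sketch reordering the placeholders for the same epoch used above:

from datetime import datetime

datetime_obj = datetime.utcfromtimestamp(123456789)
print(datetime_obj.strftime("%Y/%m/%d %H:%M"))  # 1973/11/29 21:33
print(datetime_obj.strftime("%d %m %Y"))        # 29 11 1973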

Datetime to UNIX Timestamp in Python

To convert a datetime object to UNIX timestamp in python, we can use the timestamp() method. The timestamp() method, when invoked on a datetime object, returns the UNIX epoch for the given datetime object. You can observe this in the following example.

from datetime import datetime

datetime_obj=datetime.today()
print("The datetime object is:")
print(datetime_obj)
epoch=datetime_obj.timestamp()
print("The epoch is:")
print(epoch)

Output:

The datetime object is:
2023-01-24 00:34:40.582494
The epoch is:
1674500680.582494

In this example, we first obtained the current datetime using the datetime.today() method. Then, we used the timestamp() method to convert datetime object to timestamp.

Convert Negative Timestamp to Datetime in Python

The UNIX timestamp or epoch is basically the number of seconds that have elapsed since UTC 1st January 1970, 0 hours, 0 minutes, 0 seconds. So, if we represent a date before 1970 using an epoch, the value is negative. For example, if we represent 31 December 1969 using an epoch, it will evaluate to -86400, i.e. 24 hours * 3600 seconds per hour = 86400 seconds before 01 Jan 1970. You can observe this in the following example.

from datetime import datetime

epoch=-86400
print("The epoch is:")
print(epoch)
datetime_obj=datetime.fromtimestamp(epoch)
print("The datetime object is:")
print(datetime_obj)

Output:

The epoch is:
-86400
The datetime object is:
1969-12-31 05:30:00

To convert a negative UNIX timestamp to datetime, you can directly pass it to the fromtimestamp() method as shown above. Here, we have specified the timestamp as -86400. Hence, the fromtimestamp() method returns the datetime 86400 seconds before the datetime 01 Jan 1970 +5:30.

Again, the above approach will give you time according to the time zone on your machine. If you want to get the UTC time from the timestamp, you can use the utcfromtimestamp() instead of the fromtimestamp() method as shown below.

from datetime import datetime

epoch=-86400
print("The epoch is:")
print(epoch)
datetime_obj=datetime.utcfromtimestamp(epoch)
print("The datetime object is:")
print(datetime_obj)

Output:

The epoch is:
-86400
The datetime object is:
1969-12-31 00:00:00

In this example, you can observe that the utcfromtimestamp() method returns the datetime 1969-12-31 00:00:00 which is exactly 86400 seconds before Jan 1, 1970 00:00:00.

Instead of using the above approach, you can also use the timedelta() function to convert the negative epoch to datetime.

The timedelta() function takes the negative epoch as its input argument and returns a timedelta object. After calculating the timedelta object, you can add it to the datetime object representing Jan 1, 1970. This will give you the datetime object for the negative epoch that was given as input. You can observe this in the following example.

from datetime import datetime
from datetime import timedelta

epoch=-86400
print("The epoch is:")
print(epoch)
datetime_obj=datetime(1970,1,1)+timedelta(seconds=epoch)
print("The datetime object is:")
print(datetime_obj)

Output:

The epoch is:
-86400
The datetime object is:
1969-12-31 00:00:00

Conclusion

In this article, we have discussed different ways to convert a UNIX timestamp to datetime in python. We also discussed how to convert a negative epoch to a datetime object in python.

To learn more about python programming, you can read this article on python simplehttpserver. You might also like this article on python with open statement.

I hope you enjoyed reading this article. Stay tuned for more informative articles.

Happy Learning!

The post Convert Epoch to Datetime in Python appeared first on PythonForBeginners.com.

Categories: FLOSS Project Planets

PyBites: 10 things that hamper your Python career progress

Planet Python - Wed, 2023-01-25 08:42

We all know that becoming a Python developer is hard.

There’s the “10,000-hour” principle which means there’s a significant amount of effort and time you’re going to have to invest.

More important though is how you’ll spend that time. Are you working on the right things and tackling increasingly challenging goals?

We talk with a lot of people about their Python career goals and here are some common things that are holding them back:

  1. I do not have a clear end goal of what I wanted to do for a career.
  2. I don’t know what options there are to break into the tech industry and what niche I want to be in.
  3. I feel as though I have learned the foundations of Python but am struggling with understanding how to bridge the gap from where I am now to building a production-ready application.
  4. I am completely self-taught, I have taken numerous courses from xyz, but have not been able to feel confident enough to develop an application end-to-end. Getting to this point will give me the confidence to apply for developer type jobs.
  5. Bad time management. I often find myself caught up in other “urgent” tasks. It can be very tough to work on a programming project on and off (in an unfocused manner), a lot of time gets lost having to re-learn skills, never reaching consistent fluency.
  6. Tutorial hell. And it’s hard when I’m getting stuck in dead ends, not being able to find the answers on the Internet.
  7. Impostor syndrome, not feeling confident enough and not knowing where my skill/experience level fits compared to other developers.
  8. I just haven’t got the ‘developer mindset‘ yet e.g., “How would a dev approach a particular problem?”
  9. Sites with simple coding tasks tend to focus on little parts of potentially big things so I do not see a good way to learn without building a bigger project, with a mentor, gaining more holistic developer skills. I lack feedback and code reviews from more experienced developers.
  10. My main obstacle is a lack of direction (not having anybody to bounce questions or ideas off of) and a sort of decision paralysis given the sheer number of options, resources, libraries and “best practices” available.

Do one or more of these things resonate with you?

Do you want to get to the next level as a Python developer but you feel these types of reasons are holding you back too?

Then we have good news. You can resolve all of these things in just 3 months by putting in ~10 hours of consistent effort a week.

Our team of expert coaches can show you the right way, and once exposed to it, you won’t look back to your old way of doing things.

Working with us:

  • You will get a crystal clear understanding of your goals, what will (and won’t) matter in your career moving forward.
  • You will ship code, specifically two or three fully-fledged applications of your choosing (!) – a unique approach we take which teaches you Python, common tools and libraries, while at the same time you’ll build up your portfolio (people that showcase their projects land jobs!)
  • Code reviewing, pair programming, design reviewing… it’s like working in a dev team before you officially land such a job (fake it before you make it in a safe environment)
  • You’ll learn to push through doubts and fears, embracing imposter syndrome. Pybites is unique in the industry by offering a solution that teaches both the tech and the mindset side of things. This is something we’re proud of and do with passion every single day. People we work with often recognize towards the end that the mindset was the unexpected hero, and THE ingredient they actually needed the most!
  • No more tutorial hell (yes that’s possible!) – with us you’ll embrace JIT (“just in time”) learning. You’ll learn things as the need arises and only use courses and books as reference materials. A weight will fall off your shoulders and this alone will make you a more effective developer. You’ll constantly have to learn new things, and you’re expected to pick things up fast too. Working with us you’ll learn this from the get go and it’s a career transforming skill.

If you’re excited at this point at the prospect of taking your Python journey to the next level in an effective way, it’s time to take action:

1) Check out our Pybites Developer Mindset (PDM) program.

2) If you got excited about the sample projects and watching / reading what PDM alumni have gained from working with us, then apply on the page. Please provide us with as much detail as possible so we can be prepared when we reach out to you.

Looking forward to hearing from you.

– Bob & Julian

Categories: FLOSS Project Planets

Colorfield: The state of GraphQL with Drupal 10 (part 1)

Planet Drupal - Wed, 2023-01-25 03:47
In this first part, we will compare the key differences between the Drupal GraphQL module versions 3 and 4, where to start, and various ways to write a schema with version 4.
Categories: FLOSS Project Planets

Talk Python to Me: #400: Ruff - The Fast, Rust-based Python Linter

Planet Python - Wed, 2023-01-25 03:00
Our code quality tools (linters, test frameworks, and others) play an important role in keeping our code error free and conforming to the rules our teams have chosen. But when these tools become sluggish and slow down development, we often avoid running them or even turn them off. On this episode, we have Charlie Marsh here to introduce Ruff, a fast Python linter, written in Rust. To give you a sense of what he means with fast, common Python linters can take 30-60 seconds to lint the CPython codebase. Ruff takes 300 milliseconds. I ran it on the 20,000 lines of Python code for our courses web app at Talk Python Training, and it was instantaneous. It's the kind of tool that can change how you work. I hope you're excited to learn more about it.

Links from the show:

  • Charlie on Twitter: @charliermarsh (https://twitter.com/charliermarsh)
  • Charlie on Mastodon: @charliermarsh@hachyderm (https://hachyderm.io/@charliermarsh)
  • Ruff: https://github.com/charliermarsh/ruff
  • PyCharm Developer Advocate Job: jetbrains.com/careers (https://talkpython.fm/pycharm-advocate-job)
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=LCva0NOM2-o
  • Episode transcripts: https://talkpython.fm/episodes/transcript/400/ruff-the-fast-rust-based-python-linter

Stay in touch with us:

  • Subscribe to us on YouTube: https://talkpython.fm/youtube
  • Follow Talk Python on Mastodon: @talkpython (https://fosstodon.org/web/@talkpython)
  • Follow Michael on Mastodon: @mkennedy (https://fosstodon.org/web/@mkennedy)

Sponsors:

  • Cox Automotive (https://talkpython.fm/cox)
  • User Interviews (https://talkpython.fm/userinterviews)
  • Talk Python Training (https://talkpython.fm/training)
Categories: FLOSS Project Planets

Report of the Board of KDE e.V.

Planet KDE - Tue, 2023-01-24 22:00

The KDE e.V. does a ton of work to support the KDE Community throughout the year. In this session, the board gives you some insights into the work of the e.V. and what we did over the last year to support the KDE Community.

Categories: FLOSS Project Planets
