Feeds

OSI’s Response to NTIA ‘Dual Use’ RFC 3.27.2024

Open Source Initiative - Tue, 2024-04-02 14:00

March 27, 2024

Mr. Bertram Lee
National Telecommunications and Information Administration (NTIA)
U.S. Department of Commerce
1401 Constitution Avenue NW
Washington, DC 20230

RE: [Docket Number 240216-0052] Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights

Dear Mr. Lee:

The Open Source Initiative (“OSI”) appreciates the opportunity to provide our views on the above referenced matter. As steward of the Open Source Definition, the OSI sets the foundation for Open Source software, a global public good that plays a vital role in the economy and is foundational for most technology we use today. As the leading voice on the policies and principles of Open Source, the OSI helps build a world where the freedoms and opportunities of Open Source software can be enjoyed by all and supports institutions and individuals working together to create communities of practice in which the healthy Open Source ecosystem thrives. One of the most important activities of the OSI, a California public benefit 501(c)(3) organization founded in 1998, is to maintain the Open Source Definition for the good of the community.

The OSI is encouraged by the work of NTIA to bring stakeholders together to understand the lessons from the Open Source software experience in having a recognized, unified Open Source Definition that enables an ecosystem whose value is estimated to be worth $8.8 trillion. As provided below in more detail, it is essential that federal policymakers encourage Open Source AI models to the greatest extent possible, and work with organizations like the OSI, which is endeavoring to create a unified, recognized definition of Open Source AI.

The Power of Open Source

Open Source delivers autonomy and personal agency to software users, enabling a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of Open Source is higher quality, better reliability, greater flexibility, lower cost, and an end to proprietary lock-in.

Open Source software is widely used across the federal government and in every critical infrastructure sector. “The Federal Government recognizes the immense benefits of Open Source software, which enables software development at an incredible pace and fosters significant innovation and collaboration.” For the last two decades, authoritative direction and educational resources have been given to agencies on the use, management and benefits of Open Source software.

Moreover, Open Source software has direct economic and societal benefits. Open Source software empowers companies to develop, test and deploy services, thereby substantiating market demand and economic viability. By leveraging Open Source, companies can accelerate their progress and focus on innovation. Many of the essential services and technologies of our society and economy are powered by Open Source software, including, e.g., the Internet.

The Open Source Definition has demonstrated that massive social benefits accrue when the barriers to learning, using, sharing and improving software systems are removed. The core criteria of the Open Source Definition – free redistribution; source code; derived works; integrity of the author’s source code; no discrimination against persons or groups; no discrimination against fields of endeavor; distribution of license; license must not be specific to a product; license must not restrict other software; license must be technology-neutral – have given users agency, control and self-sovereignty of their technical choices and a dynamic ecosystem based on permissionless innovation.

A recent study published by the European Commission estimated that companies located in the European Union invested around €1 billion in Open Source Software in 2018, which brought about a positive impact on the European economy of between €65 and €95 billion.

This success and the potency of Open Source software have for the last three decades relied upon the recognized, unified definition of Open Source software and the list of Approved Licenses that the Open Source Initiative maintains.

OSI believes this “open” analog is highly relevant to Open Source AI as an emerging technology domain with tremendous potential for public benefit.

Distinguishing the Open Source Definition

The OSI Approved License trademark and program create a nexus of trust around which developers, users, corporations and governments can organize cooperation on Open Source software. However, it is generally agreed that the Open Source Definition, drafted 26 years ago and maintained by the OSI, does not cover this new era of AI systems.

AI models are not just code; they are trained on massive datasets, deployed on intricate computing infrastructure, and accessed through diverse interfaces and modalities. With traditional software, there was a very clear separation between the code one wrote, the compiler one used, the binary it produced, and the license one had. However, for AI models, many components collectively influence the functioning of the system, including the algorithms, code, hardware, and datasets used for training and testing. The very notion of modifying the source code (which is important in the Open Source Definition) becomes fuzzy. For example, there is the key question of whether the training dataset, the model weights, or other key elements should be considered independently or collectively as the source code for the model/weights that have been trained.

AI (specifically the models it manifests) comprises a variety of technologies, each a vital element of every model.

This challenge is not new. In its guidance on the use of Open Source software, the US Department of Defense distinguished open systems and open standards from Open Source software, noting that while they are “different from Open Source software, they are complementary and can work well together”:

Open standards make it easier for users to (later) adopt an Open Source software
program, because users of open standards aren’t locked into a particular
implementation. Instead, users who are careful to use open standards can easily
switch to a different implementation, including an OSS implementation. … Open
standards also make it easier for OSS developers to create their projects, because
the standard itself helps developers know what to do. Creating any interface is an
effort, and having a predefined standard helps reduce that effort greatly.

OSS implementations can help create and keep open standards open. An OSS
implementation can be read and modified by anyone; such implementations can
quickly become a working reference model (a “sample implementation” or an
“executable specification”) that demonstrates what the specification means
(clarifying the specification) and demonstrating how to actually implement it.
Perhaps more importantly, by forcing there to be an implementation that others can
examine in detail, resulting in better specifications that are more likely to be used.

OSS implementations can help rapidly increase adoption/use of the open standard.
OSS programs can typically be simply downloaded and tried out, making it much
easier for people to try it out and encouraging widespread use. This also pressures
proprietary implementations to limit their prices, and such lower prices for
proprietary software also encourages use of the standard.

With practically no exceptions, successful open standards for software have OSS
implementations.

Towards a Unified Vision of what is ‘Open Source AI’

With these essential differentiating elements in mind, last summer the OSI kicked off a multi-stakeholder process to define the characteristics of an AI system that can be confidently and generally understood to be “Open Source”.

This collaboration utilizes the latest definition of an AI system adopted by the Organisation for Economic Co-operation and Development (OECD), which has been the foundation for NIST’s “AI Risk Management Framework” as well as the European Union’s AI Act:

An AI system is a machine-based system that, for explicit or implicit objectives,
infers, from the input it receives, how to generate outputs such as predictions,
content, recommendations, or decisions that can influence physical or virtual
environments. Different AI systems vary in their levels of autonomy and
adaptiveness after deployment.

Since its announcement last summer, the OSI has had an open call for papers and held open webinars in order to collect ideas from the community describing precise problem areas in AI and collect suggestions for solutions. More than 6 community reviews – in Europe, Africa, and various locations in the US – have taken place in 2023, coinciding with a first draft of the Open Source AI Definition. This year, the OSI has coordinated working groups to analyze various foundation models, released three more drafts of the Definition, hosted bi-weekly public town halls to review and continues to get feedback from a wide variety of stakeholders, including:

  • System Creators (make an AI system and/or component that will be studied, used, modified, or shared through an Open Source license);
  • License Creators (write or edit the Open Source license to be applied to the AI system or component; includes compliance);
  • Regulators (write or edit rules governing licenses and systems, e.g. government policy-makers);
  • Licensees (seek to study, use, modify, or share an Open Source AI system, e.g. AI engineers, health researchers, education researchers);
  • End Users (consume a system output, but do not seek to study, use, modify, or share the system, e.g. a student using a chatbot to write a report, an artist creating an image);
  • Subjects (affected upstream or downstream by a system output without interacting with it intentionally; includes advocates for this group, e.g. people with a loan denied, or content creators).

What is Open Source AI?

An Open Source AI is an AI system made available to the public under terms that grant the freedoms to:

  • Use the system for any purpose and without having to ask for permission.
  • Study how the system works and inspect its components.
  • Modify the system for any purpose, including to change its output.
  • Share the system for others to use with or without modifications, for any purpose.

A precondition to exercising these freedoms is having access to the preferred form in which to make modifications to the system.

The OSI expects to wrap up and report the outcome of the in-person and online meetings, and anticipates having the draft endorsed by at least 5 representatives of each of the stakeholder groups, with a formal announcement of the results in late October.

To address the need to define rules for maintenance and review of this new Open Source AI Definition, the OSI Board of Directors approved the creation of a new committee to oversee the development of the Open Source AI Definition, approve version 1.0, and set rules for the maintenance of the Definition.

Some preliminary observations based on these efforts to date:

  • It is generally recognized, as indicated above, that the Open Source Definition as created for software does not completely cover this new era of Open Source AI. This is not a software-only issue and is not something that can be solved by applying the same exact terms in the new territory of defining Open Source AI. The Open Source AI definition will start from the core motivation of the need to ensure users of AI systems retain their autonomy and personal agency.
  • To the greatest degree practical, Open Source AI should not be limited in scope, allowing users the right to adopt the technology for any purpose. One of the key lessons underlying the success of the Open Source Definition is that field-of-use restrictions deprive creators of the ability to use software tools in ways that effect positive outcomes in society.
  • Reflecting on the past 20-to-30 years of learning about what has gone well and what hasn’t in terms of the open community and the progress it has made, it’s important to understand that openness does not automatically mean ethical, right or just. Other factors such as privacy concerns and safety when developing open systems come into play, and in each element of an AI model – and when put together as a system — there is an ongoing tension between something being open and being safe, or potentially harmful.
  • Open Source AI systems lower the barrier for stakeholders outside of large tech companies to shape the future of AI, enabling more AI services to be built by and for diverse communities with different needs that big companies may not always address.
  • Similarly, Open Source AI systems make it easier for regulators and civil society to assess AI systems for compliance with laws protecting civil rights, privacy, consumers, and workers. They increase transparency, education, testing and trust around the use of AI, enabling researchers and journalists to audit and write about AI systems’ impacts on society.
  • Open source AI systems advance safety and security by accelerating the understanding of their capabilities, risks and harms through independent research, collaboration, and knowledge sharing.
  • Open source AI systems promote economic growth by lowering the barrier for innovators, startups, and small businesses from more diverse communities to build and use AI. Open models also help accelerate scientific research because they can be less expensive, easier to fine-tune, and supportive of reproducible research.

The OSI looks forward to working with NTIA as it considers the comments to this RFI, and stands ready to participate in any follow-on discussions on this or the general topic of ‘Dual Use Foundation Artificial Intelligence Models With Widely Available Model Weights’. As shared above, it is essential that federal policymakers encourage Open Source AI models to the greatest extent possible, and work with organizations like the OSI and others who are endeavoring to create a unified, recognized definition of Open Source AI.

Respectfully submitted,
THE OPEN SOURCE INITIATIVE


For more information, contact:

  • Stefano Maffulli, Executive Director
  • Deb Bryant, US Policy Director


Categories: FLOSS Research

Drupal Association Journey: Pedro Cambra: Survey on Bookmarking Tool Needs Your Input

Planet Drupal - Tue, 2024-04-02 13:06

TL;DR: I’m requesting members of the Drupal community to help my research about the need for a bookmarking tool by responding to a super quick survey.

As part of my dissertation work for my bachelor’s degree, I’m unsurprisingly working on something related to Drupal. After a lot of consideration regarding a project that could be within a reasonable scope but also allow me to contribute a little bit to the Drupal ecosystem, a chat with Cristina and Christian helped me decide to work on the shortcut module, and try to make improvements before it is marked for removal from core – and try to avoid that, because I believe it could be a useful tool for both the navigation and the dashboards initiatives.

But first things first.

One of the elements I am looking to explore the most in my research is the full process of contribution: from identifying the issue to solve, to getting quantitative data through a survey in the community to establish that the problem is worth solving, to proposing a solution and getting feedback on it.

I would appreciate it a lot if you could help me achieve my goal by answering the survey I’ve prepared.

#Drupal

Discuss...

Categories: FLOSS Project Planets

Bits from Debian: Bits from the DPL

Planet Debian - Tue, 2024-04-02 13:00

Dear Debianites

This morning I decided to just start writing Bits from DPL and send whatever I have by 18:00 local time. Here it is, barely proofread, along with all its warts and grammar mistakes! It's slightly long and doesn't contain any critical information, so if you're not in the mood, don't feel compelled to read it!

== Get ready for a new DPL! ==

Soon, the voting period will start to elect our next DPL, and my time as DPL will come to an end. Reading the questions posted to the new candidates on [debian-vote], it takes quite a bit of restraint not to answer all of them myself; I think I can see how that aspect contributed to me being reeled in to running for DPL! In total I've run 5 times (the first time I ran, Sam was elected!).

Good luck to both [Andreas] and [Sruthi], our current DPL candidates! I've already started working on preparing the handover, and there are multiple requests from teams that have come in recently that will have to wait for the new term, so I hope they're both ready to hit the ground running!

  • [debian-vote] Mailing list: https://lists.debian.org/debian-vote/2024/03/threads.html
  • Platform: https://www.debian.org/vote/2024/platforms/tille [Andreas]
  • Platform: https://www.debian.org/vote/2024/platforms/srud [Sruthi]

== Things that I wish could have gone better ==

  • Communication:

Recently, I saw a t-shirt that read:

Adulthood is saying, 'But after this week things will slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished; I tended to prioritise the problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise. I do rate communication and transparency very highly, and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds into Debian, instead of just letting them heap up. DSA just spent a nice chunk of money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it would have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team).

I don't want to single out DSA there, it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time- and money-wise) to extend testing on AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it!

I'm not going to push my agendas to the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

  • Founding Debian as a standalone entity:

This was my number one goal for the project this last term, which was a carried over item from my previous terms.

I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail.

In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "The grass is always better on the other side", or "Next week will go better until I die?"). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options, and write up a report for the project that can lead to a GR.

Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards. So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

== My terms in a nutshell ==

  • COVID-19 and Debian 11 era:

My first term in 2020 started just as the COVID-19 pandemic became known to be spreading globally. It was a tough year for everyone, and Debian wasn't immune to its effects either. Many of our contributors got sick, some lost loved ones (my father passed away in March 2020 just after I became DPL), some lost their jobs (or other earners in their household did) and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when the in-person DebConf20 got cancelled, we understood that that was necessary, but it was still more bad news in a year that had too much of it already.

I can't remember if there was ever any kind of formal choice or discussion about this at any time, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that led to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even though it was on screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It had a lasting cultural change in Debian: some teams still have video meetings now, where they didn't do that before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that were fun, but DebConf21 was also online, and by then we had all developed online-conference fatigue; while it was another good online event overall, it did start to feel a bit like a zombieconf. After that, we had some really nice events from the Brazilians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should spend some further energy on this, but hey! This isn't a platform, so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than what we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a former Debian Developer who was expelled for his poor behaviour in Debian, and who continued to harass members of the Debian project and of other free software communities after his expulsion. This ended up being quite a lot of work, since we had to take legal action to protect our community, and eventually also get the police involved. I'm not going to give him the satisfaction of spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here:

https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two consecutive votes to address some issues that had affected past votes:

  • https://www.debian.org/vote/2021/vote_003
  • https://www.debian.org/vote/2022/vote_001

In my first term I addressed our delegations that were a bit behind; by the end of my last term all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state. Delegation updates can be very deceiving: sometimes a delegation is completely re-written and it was just 1 or 2 hours of work. Other times, a delegation update can contain one changed line, or a change in one team member that was the result of days' worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third party directly for hosting. This was quite an admin nightmare: it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could admittedly have chosen a better name, but that can happen later on) to provide hosting at two different hosting providers that we have agreements with, so that people who host things under debian.net have an easy way to host them, and at the same time Debian also has more control if a site maintainer goes MIA.

More info:

https://wiki.debian.org/Teams/DebianNet

You might notice some OpenStack mentioned there; we had some intention to set up a Debian cloud for hosting these things, which could also be used for other additional Debiany things like archive rebuilds, but this has so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company that can sponsor a few racks and servers, please get in touch!)

  • DebConf22 and Debian 12 era:

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf - understandably so, considering that there were still COVID risks and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf in Hamburg. It still feels odd typing that; it feels like I should've been at one before, but my location makes attending them difficult (on a side note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off a MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple EPROMs that you'd find in old printers to the complicated firmware in modern GPUs that basically contains complete operating systems, complete with drivers for the devices they're running on. I also showed my shiny new laptop, and explained that it's impossible to install Debian on that laptop without non-free firmware (you'd get a black display on d-i or Debian live). You couldn't even use an accessibility mode with audio, since even that depends on non-free firmware these days.

Steve, from the image building team, has said for a while that we need to do a GR to vote for this, and after more discussion at DebConf, I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately in Debian we just don't have the resources for this), but, I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.

At this point, I'd like to give a special thanks to the ftpmasters, the image building team and the installer team, who worked really hard to get the changes done that were needed in order to make this happen for Debian 12, and who were really proactive about the remaining niggles that were solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong and also proved that it's never too late to fix bugs; everything on my list got eliminated by the time the final freeze hit, which was great! We usually aim to have a release ready about 2 years after the previous release; sometimes there are complications during a freeze and it can take a bit longer. But due to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it's going to be too long for the purpose of this mail. Important questions that did come out of this are: should core Debian packages be team maintained? And how far should the CTTE really be able to override a maintainer? We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem, who very patiently explained a few things to me (after probably having had to do so many times for others before), and to Helmut, who did the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable; it will just take some time and patience, and I have lots of confidence in everyone involved.

UsrMerge wiki page: https://wiki.debian.org/UsrMerge

  • DebConf 23 and Debian 13 era:

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what the most difficult thing I had to do was during my terms as DPL. I answered that nothing particular stood out, and that even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period of being DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me that I saw his mother scream "He was my everything! He was my everything!"; this was by a large margin the hardest day I've ever had in Debian, and I really wasn't ok for even a few weeks after that, and I think the hurt will be with many of us for some time to come. So, a plea again to everyone: please take care of yourself! There's probably more people that love you than you realise.

A special thanks to the DebConf23 team, who did a really good job despite all the uphills they faced (and there were many!).

As DPL, I think that planning for a DebConf is nearly impossible; all you can do is show up and just jump into things. I planned to work with Enrico to finish up something that will hopefully save future DPLs some time, and that is a web-based DD certificate creator, instead of having the DPL do so manually using LaTeX. It already mostly works: you can see the work so far by visiting https://nm.debian.org/person/ACCOUNTNAME/certificate/ and replacing ACCOUNTNAME with your Debian account name, and if you're a DD, you should see your certificate. It still needs a few minor changes and a DPL signature, but at this point I think that will be finished up when the new DPL starts. Thanks to Enrico for working on this!

Since my first term, I've been trying to find ways to improve all our accounting/finance issues. Tracking what we spend on things, and getting an annual overview is hard, especially over 3 trusted organisations. The reimbursement process can also be really tedious, especially when you have to provide files in a certain order and combine them into a PDF. So, at DebConf22 we had a meeting along with the treasurer team and Stefano Rivera who said that it might be possible for him to work on a new system as part of his Freexian work. It worked out, and Freexian funded the development of the system since then, and after DebConf23 we handled the reimbursements for the conference via the new reimbursements site:

https://reimbursements.debian.net

It's still early days, but over time it should be linked to all our TOs and we'll use the same category codes across the board. So, overall, our reimbursement process becomes a lot simpler, and we'll also be able to get information like how much money we've spent on any category in any period. It will also help us to track how much money we have available or how much we spend on recurring costs; right now that needs manual polling from our TOs. So I'm really glad that this big, long-standing problem in the project is being fixed.

For Debian 13, we're waving goodbye to the KFreeBSD and mipsel ports. But we're also gaining riscv64 and loongarch64 as release architectures! I have 3 different RISC-V based machines on my desk here that I haven't had much time to work with yet, you can expect some blog posts about them soon after my DPL term ends!

As Debian is a Unix-like system, we're affected by the [Year 2038 problem], where systems that use 32-bit time in seconds since 1970 run out of available time and will wrap around or have other undefined behaviour. A detailed [wiki page] explains how this works in Debian, and currently we're going through a rather large transition to make this possible.

[Year 2038 problem] https://simple.wikipedia.org/wiki/Year_2038_problem
[wiki page] https://wiki.debian.org/ReleaseGoals/64bit-time

I believe this is the right time for Debian to be addressing this: we're still a bit more than a year away from the Debian 13 release, and this provides enough time to test the implementation before 2038 rolls along.
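
To make the wraparound arithmetic concrete, here is a minimal Python sketch; it only illustrates 32-bit time_t overflow and is not anything from Debian's actual transition tooling:

import struct
from datetime import datetime, timezone

# The largest second count a signed 32-bit time_t can represent.
t_max = 2**31 - 1
print(datetime.fromtimestamp(t_max, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00

# One second later the bit pattern wraps to the most negative 32-bit
# integer, which a 32-bit system would interpret as a date back in 1901.
# (Negative timestamps may not convert on Windows.)
wrapped = struct.unpack("<i", struct.pack("<I", (t_max + 1) % 2**32))[0]
print(wrapped)                                           # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00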

Of course, big complicated transitions with dependency loops that cause chaos for everyone would still be too easy, so this past weekend (which is a holiday period in most of the West due to the Easter weekend) has been filled with dealing with an upstream bug in xz-utils, where a backdoor was placed in this key piece of software. An [Ars Technica] article covers it quite well, so I won't go into all the details here. I mention it because I want to give yet another special thanks to everyone involved in dealing with this on the Debian side. Everyone, from the ftpmasters to the security team and others involved, was super calm and professional and made quick, high-quality decisions. This also led to the archive being frozen on Saturday; this is the first time I've seen this happen since I've been a DD, but I'm sure next week will go better!

[Ars Technica] https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/

== Looking forward ==

It's really been an honour for me to serve as DPL. It might well be my biggest achievement in my life. Previous DPLs range from prominent software engineers to game developers, or people who have done things like complete an Ironman, run other huge open source projects and be part of big consortiums. Ian Jackson even authored dpkg and is now working on the very interesting [tag2upload service]!

[tag2upload service] https://peertube.debian.social/w/pav68XBWdurWzfTYvDgWRM

I'm a relative nobody, just someone who grew up as a poor kid in South Africa, who just really cares about Debian a lot. And, above all, I'm really thankful that I didn't do anything major to screw up Debian for good.

Not unlike learning how to use Debian, and also becoming a Debian Developer, I've learned a lot from this and it's been a really valuable growth experience for me.

I know I can't possibly give all the thanks to everyone who deserves it, so here's a big, big thanks to everyone who has worked so hard and put in many, many hours to making Debian better. I consider you all heroes!

-Jonathan

Categories: FLOSS Project Planets

Real Python: Python Deep Learning: PyTorch vs Tensorflow

Planet Python - Tue, 2024-04-02 10:00

PyTorch vs TensorFlow: What’s the difference? Both are open source Python libraries that use graphs to perform numerical computation on data. Both are used extensively in academic research and commercial code. Both are extended by a variety of APIs, cloud computing platforms, and model repositories.
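
For a feel of how similar the two libraries are for basic numerical work, here is a minimal sketch computing the same dot product in each. This is an illustration assuming both torch and tensorflow are installed, not material from the course itself:

import torch
import tensorflow as tf

# The same dot product, expressed in each library's tensor API.
a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])
print(torch.dot(a, b))             # tensor(11.)

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 4.0])
print(tf.tensordot(x, y, axes=1))  # tf.Tensor(11.0, shape=(), dtype=float32)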

If they’re so similar, then which one is best for your project?

In this video course, you’ll learn:

  • What the differences are between PyTorch and TensorFlow
  • What tools and resources are available for each
  • How to choose the best option for your specific use case

You’ll start by taking a close look at both platforms, beginning with the slightly older TensorFlow, before exploring some considerations that can help you determine which choice is best for your project. Let’s get started!


Categories: FLOSS Project Planets

Matt Glaman: Ensuring smart_date works for all versions of Drupal 10 and 11

Planet Drupal - Tue, 2024-04-02 09:18

At MidCamp a few weeks ago, Martin Anderson-Clutz tapped me on the shoulder to check out a Smart Date issue for compatibility with Drupal 10.2. As of Drupal 10.2, ListItemBase::extractAllowedValues takes an array as its first argument rather than a string; the method previously exploded a newline-separated string into an array for its callers. I took a look at the issue. The change affected the parseValues method in the SmartDateListItemBase class. The parseValues method takes the field's values and passes them to extractAllowedValues, the method whose signature changed in Drupal 10.2 and later.
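
As a language-neutral illustration of the compatibility pattern involved (a hypothetical Python sketch, not Smart Date's actual PHP fix), the usual approach is to normalize the argument before handing it to the API whose signature changed:

def parse_values(values):
    # Hypothetical helper: older cores pass a newline-separated string,
    # newer cores pass a list, so accept both and normalize to a list
    # before any further processing.
    if isinstance(values, str):
        values = values.splitlines()
    return [v.strip() for v in values if v.strip()]

print(parse_values("a\nb\n"))    # ['a', 'b']
print(parse_values(["a", "b"]))  # ['a', 'b']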

The original method contained the following:

Categories: FLOSS Project Planets

Anwesha Das: Opening up Ansible release to the community

Planet Python - Tue, 2024-04-02 08:52

Transparency, collaboration, inclusivity, and openness lay the foundation of the Open Source community. As a project's maintainers, a few of our tasks are to keep the entry bar for contribution low, make collaboration easy, and keep the governance model fair. The Ansible Community Engineering Team always strives toward these goals through our different endeavors.

Ansible has historically been released by Red Hat employees. We planned to open up the release process to the community, and I was asked to work on that. My primary goal was that releasing Ansible should be dull and possible for the community to do. This was my first time dealing with GitHub Actions. There is still a lot to learn. But we are there now.

The Release Management working group started releasing the Ansible Community package using a GitHub Actions workflow from Ansible version 9.3.0. The recent 9.4.0 release also followed the same workflow.

Thank you Felix Fontein, Maxwell G, Sviatoslav Sydorenko and Toshio for helping to shape the workflow with your valuable feedback, for doing the actual releases, and for answering my innumerable queries.

Categories: FLOSS Project Planets

EuroPython: EuroPython 2024: Ticket sales now open! 🐍

Planet Python - Tue, 2024-04-02 08:39

Hey hey, everyone,

We are thrilled to announce that EuroPython is back and better than ever! ✨
EuroPython 2024 will be held 8-14 July at the Prague Congress Centre (PCC), Czech Republic. Details of how to participate remotely will be published soon.

The conference will follow the same structure as the previous editions:

  • Two Workshop/Tutorial Days (8-9 July, Mon-Tue)
  • Three Conference Days (10-12 July, Wed-Fri)
  • Sprint Weekend (13-14 July, Sat-Sun)

Secure your spot at EuroPython 2024 by purchasing your tickets today. For more information and to grab your tickets, visit https://ep2024.europython.eu/tickets before they sell out!

Get your tickets fast before the late-bird prices kick in. 🏃

Looking forward to welcoming you to EuroPython 2024 in Prague! 🇨🇿

🎫 Don't forget to get your ticket at - https://ep2024.europython.eu

Cheers,

The EuroPython 2024 Organisers

Categories: FLOSS Project Planets

Qt 6.7 Released!

Planet KDE - Tue, 2024-04-02 05:54

Qt 6.7 is out with lots of large and small improvements for all of us who like to have fun when building modern applications and user experiences.

Categories: FLOSS Project Planets

Specbee: How to Write Your First Test Case Using PHPUnit & Kernel in Drupal

Planet Drupal - Tue, 2024-04-02 04:07
Can you imagine a world where your code functions flawlessly, your bugs are scared of your testing routine, and your users enjoy a seamless experience, free from crashes and errors? If so, you already understand the importance of automated testing. With automated testing, Drupal developers can elevate code quality, streamline workflows, and fortify digital ecosystems against errors and bugs. Drupal offers 4 types of PHPUnit tests: Unit, Kernel, Functional, and Functional JavaScript. In this blog post, we'll explore PHPUnit (unit) tests and Kernel tests.

Setting Up PHPUnit in Drupal

The method recommended by Drupal for setting up PHPUnit is Composer-based:

$ composer require --dev phpunit/phpunit --with-dependencies
$ composer require behat/mink && composer require --dev phpspec/prophecy

(Note - PHPUnit version 11 requires PHP 8.2. This blog is written for Drupal 10 and PHP 8.1.)

Once PHPUnit and its dependencies are installed, the next step involves creating and configuring the phpunit.xml file. Locate the phpunit.xml.dist file in the core folder of your Drupal installation. Copy this file into the docroot directory and rename it to phpunit.xml (it is recommended to keep the file in the docroot directory instead of core so that it won't get affected by core updates).

Create the simpletest and browser_output directories. In order to run tests like Kernel and Functional tests we need to create these two directories and make them writable:

$ mkdir -p docroot/sites/simpletest/browser_output && chmod -R 777 docroot/sites/simpletest

To run tests locally, we need to configure some values in phpunit.xml:

1. Change <env name="SIMPLETEST_BASE_URL" value="" /> to <env name="SIMPLETEST_BASE_URL" value="https://yoursiteurl.com" />
2. Change <env name="SIMPLETEST_DB" value="" /> to <env name="SIMPLETEST_DB" value="mysql://username:password@yourdbhost/databasename" />
3. Change <env name="BROWSERTEST_OUTPUT_DIRECTORY" value="" /> to <env name="BROWSERTEST_OUTPUT_DIRECTORY" value="fullpath/docroot/sites/simpletest/browser_output" />

(Note - To check the full path of your app, run this from the project root:)

$ pwd

Setting Up PHPUnit with Lando

If you want to run your tests from Lando you'll need to configure the .lando.yml file. We'll leave the defaults for most of the values in the phpunit.xml file, except for where to find the bootstrap.php file. This should be changed to the path in the Lando container, which will be /app/web/core/tests/bootstrap.php. This can be done with sed:

$ sed -i 's|tests\/bootstrap\.php|/app/web/core/tests/bootstrap.php|g' phpunit.xml

Next, edit the .lando.yml file and add the following:

services:
  appserver:
    overrides:
      environment:
        SIMPLETEST_BASE_URL: "http://mysite.lndo.site"
        SIMPLETEST_DB: "mysql://database:database@database/database"
        MINK_DRIVER_ARGS_WEBDRIVER: '["chrome", {"browserName":"chrome","chromeOptions":{"args":["--disable-gpu","--headless"]}}, "http://chrome:9515"]'
  chrome:
    type: compose
    services:
      image: drupalci/webdriver-chromedriver:production
      command: chromedriver --log-path=/tmp/chromedriver.log --verbose --whitelisted-ips=
tooling:
  test:
    service: appserver
    cmd: "php /app/vendor/bin/phpunit -c /app/phpunit.xml"

Modify SIMPLETEST_BASE_URL and SIMPLETEST_DB to point to your Lando site and database credentials as needed.

This does three things:

  • Adds environment variables to the appserver container (the one we'll run the tests in).
  • Adds a new container for the chromedriver image, which is used for running headless JavaScript tests (more on that later).
  • Adds a tooling section that adds a test command to Lando to run our tests.

Important! After updating the .lando.yml file we need to rebuild the containers:

$ lando rebuild -y

Let's run a single test with Lando:

$ lando test core/modules/datetime/tests/src/Unit/Plugin/migrate/field/DateFieldTest.php

Without Lando, verify the tests are working by running a core test. (Note - I keep my phpunit.xml file in the docroot folder and will be running tests from the docroot directory.)

$ ../vendor/bin/phpunit -c core core/modules/datetime/tests/src/Unit/Plugin/migrate/field/DateFieldTest.php

What is a PHPUnit Test?

PHPUnit (unit) tests are used to test small blocks of code and functionality that do not require a complete Drupal installation. They allow us to test the functionality of a class in isolation from the Drupal environment (database, settings, etc.), and they do not require a web browser, as any Drupal environment dependencies can be replaced by "mock" objects. Before writing unit tests, we should remember the following:

  • Base class - \Drupal\Tests\UnitTestCase. To implement a unit test we need to extend our test class from this base class.
  • Namespace - \Drupal\Tests\mymodule\Unit (or a subdirectory). We need to specify this namespace for our test.
  • Directory location - mymodule/tests/src/Unit (or a subdirectory). To be discovered by the test runner, the test class must reside in this directory.

Write Your First PHPUnit Test

Step 1: Create a custom module.

Step 2: Create the events_example.info.yml file for the custom module:

name: Events Example
type: module
description: Provides an example of subscribing to and dispatching events.
package: Custom
core_version_requirement: ^9.4 || ^10

Step 3: Create the events_example.services.yml file:

services:
  # Give your service a unique name, convention is to prefix service names with
  # the name of the module that implements them.
  events_example_subscriber:
    # Point to the class that will contain your implementation of
    # \Symfony\Component\EventDispatcher\EventSubscriberInterface
    class: Drupal\events_example\EventSubscriber\EventsExampleSubscriber
    tags:
      - {name: event_subscriber}

Step 4: Create the events_example/src/EventSubscriber/EventsExampleSubscriber.php file for our class:

<?php

namespace Drupal\events_example\EventSubscriber;

use Drupal\events_example\Event\IncidentEvents;
use Drupal\events_example\Event\IncidentReportEvent;
use Drupal\Core\Messenger\MessengerTrait;
use Drupal\Core\StringTranslation\StringTranslationTrait;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Subscribe to IncidentEvents::NEW_REPORT events and react to new reports.
 *
 * In this example we subscribe to all IncidentEvents::NEW_REPORT events and
 * point to two different methods to execute when the event is triggered. In
 * each method we have some custom logic that determines if we want to react to
 * the event by examining the event object, and then display a message to the
 * user indicating whether or not that method reacted to the event.
 *
 * By convention, classes subscribing to an event live in the
 * Drupal/{module_name}/EventSubscriber namespace.
 *
 * @ingroup events_example
 */
class EventsExampleSubscriber implements EventSubscriberInterface {

  use StringTranslationTrait;
  use MessengerTrait;

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    // Return an array of events that you want to subscribe to, mapped to the
    // method on this class that you would like called whenever the event is
    // triggered. A single class can subscribe to any number of events. For
    // organization purposes it's a good idea to create a new class for each
    // unique task/concept rather than just creating a catch-all class for all
    // event subscriptions.
    //
    // See EventSubscriberInterface::getSubscribedEvents() for an explanation
    // of the array's format.
    //
    // The array key is the name of the event you want to subscribe to. Best
    // practice is to use the constant that represents the event as defined by
    // the code responsible for dispatching the event. This way, if, for
    // example, the string name of an event changes, your code will continue to
    // work. You can get a list of event constants for all events triggered by
    // core here:
    // https://api.drupal.org/api/drupal/core%21core.api.php/group/events/8.2.x.
    //
    // Since any module can define and trigger new events there may be
    // additional events available in your application. Look for classes with
    // the special @Event docblock indicator to discover other events.
    //
    // For each event key define an array of arrays composed of the method
    // names to call and optional priorities. The method name here refers to a
    // method on this class to call whenever the event is triggered.
    $events[IncidentEvents::NEW_REPORT][] = ['notifyMario'];

    // Subscribers can optionally set a priority. If more than one subscriber
    // is listening to an event when it is triggered they will be executed in
    // order of priority. If no priority is set the default is 0.
    $events[IncidentEvents::NEW_REPORT][] = ['notifyBatman', -100];

    // We'll set an event listener with a very low priority to catch incident
    // types not yet defined. In practice, this will be the 'cat' incident.
    $events[IncidentEvents::NEW_REPORT][] = ['notifyDefault', -255];

    return $events;
  }

  /**
   * If this incident is about a missing princess, notify Mario.
   *
   * Per our configuration above, this method is called whenever the
   * IncidentEvents::NEW_REPORT event is dispatched. This method is where you
   * place any custom logic that you want to perform when the specific event is
   * triggered.
   *
   * These responder methods receive an event object as their argument. The
   * event object is usually, but not always, specific to the event being
   * triggered and contains data about application state and configuration
   * relative to what was happening when the event was triggered.
   *
   * For example, when responding to an event triggered by saving a
   * configuration change you'll get an event object that contains the relevant
   * configuration object.
   *
   * @param \Drupal\events_example\Event\IncidentReportEvent $event
   *   The event object containing the incident report.
   */
  public function notifyMario(IncidentReportEvent $event) {
    // You can use the event object to access information about the event
    // passed along by the event dispatcher.
    if ($event->getType() == 'stolen_princess') {
      $this->messenger()->addStatus($this->t('Mario has been alerted. Thank you. This message was set by an event subscriber. See @method()', ['@method' => __METHOD__]));
      // Optionally use the event object to stop propagation.
      // If there are other subscribers that have not been called yet this will
      // cause them to be skipped.
      $event->stopPropagation();
    }
  }

  /**
   * Let Batman know about any events involving the Joker.
   *
   * @param \Drupal\events_example\Event\IncidentReportEvent $event
   *   The event object containing the incident report.
   */
  public function notifyBatman(IncidentReportEvent $event) {
    if ($event->getType() == 'joker') {
      $this->messenger()->addStatus($this->t('Batman has been alerted. Thank you. This message was set by an event subscriber. See @method()', ['@method' => __METHOD__]));
      $event->stopPropagation();
    }
  }

  /**
   * Handle incidents not handled by the other handlers.
   *
   * @param \Drupal\events_example\Event\IncidentReportEvent $event
   *   The event object containing the incident report.
   */
  public function notifyDefault(IncidentReportEvent $event) {
    $this->messenger()->addStatus($this->t('Thank you for reporting this incident. This message was set by an event subscriber. See @method()', ['@method' => __METHOD__]));
    $event->stopPropagation();
  }

  /**
   * Simple function to check whether a string is non-empty.
   *
   * @param string $string
   *   The string to check.
   */
  public function checkString($string) {
    return $string ? TRUE : FALSE;
  }

}

Step 5: Create the tests/src/Unit/EventsExampleUnitTest.php file for our unit test:

<?php

namespace Drupal\Tests\events_example\Unit;

use Drupal\Tests\UnitTestCase;
use Drupal\events_example\EventSubscriber\EventsExampleSubscriber;

/**
 * Test events_example EventsExampleSubscriber functionality.
 *
 * @group events_example
 *
 * @ingroup events_example
 */
class EventsExampleUnitTest extends UnitTestCase {

  /**
   * The events_example EventsExampleSubscriber object.
   *
   * @var object
   */
  protected $eventExampleSubscriber;

  /**
   * {@inheritdoc}
   */
  protected function setUp(): void {
    $this->eventExampleSubscriber = new EventsExampleSubscriber();
    parent::setUp();
  }

  /**
   * Test simple function that returns TRUE if a string is present.
   */
  public function testHasString() {
    $this->assertEquals(TRUE, $this->eventExampleSubscriber->checkString('Some String'));
  }

}

Important notes:

  • The test class name should start or end with "Test" (for example, EventsExampleUnitTest) and it should extend the base class UnitTestCase.
  • The test function name should start with "test" (for example, testHasString), otherwise it will not be included in the test run.

Step 6: Let's run our test!

$ lando test modules/custom/events_example/tests/src/Unit/EventsExampleUnitTest.php

or

$ ../vendor/bin/phpunit -c core modules/custom/events_example/tests/src/Unit/EventsExampleUnitTest.php

PHPUnit 9.6.17 by Sebastian Bergmann and contributors.

Testing Drupal\Tests\events_example\Unit\EventsExampleUnitTest
.                                                                   1 / 1 (100%)

Time: 00:00.054, Memory: 10.00 MB

OK (1 test, 1 assertion)

Hooray! Our test has passed!

Write Your First Kernel Test

Kernel tests require specific Drupal environment dependencies, but no web browser. They allow us to test class functionality without a full Drupal installation, while still providing the environment dependencies that cannot easily be mocked. Kernel tests can access services, the database, and a minimal file system. Below are the essential prerequisites for Kernel tests:

  • Base class - \Drupal\KernelTests\KernelTestBase. To implement a Kernel test we need to extend our test class from this base class.
  • Namespace - \Drupal\Tests\mymodule\Kernel (or a subdirectory). We need to specify this namespace for our test.
  • Directory location - mymodule/tests/src/Kernel (or a subdirectory). To be discovered by the test runner, the test class must reside in this directory.

Create the tests/src/Kernel/EventsExampleServiceTest.php file for our Kernel test:

<?php

namespace Drupal\Tests\events_example\Kernel;

use Drupal\KernelTests\KernelTestBase;
use Drupal\events_example\EventSubscriber\EventsExampleSubscriber;

/**
 * Test to ensure the 'events_example_subscriber' service is reachable.
 *
 * @group events_example
 *
 * @ingroup events_example
 */
class EventsExampleServiceTest extends KernelTestBase {

  /**
   * {@inheritdoc}
   */
  protected static $modules = ['events_example'];

  /**
   * Test for existence of the 'events_example_subscriber' service.
   */
  public function testEventsExampleService() {
    $subscriber = $this->container->get('events_example_subscriber');
    $this->assertInstanceOf(EventsExampleSubscriber::class, $subscriber);
  }

}

Important notes:

  • The test class name should start or end with "Test" (for example, EventsExampleServiceTest) and it should extend the base class KernelTestBase.
  • The Kernel test should list the modules it depends on. For example, "$modules = ['events_example']"; we can include other dependent modules here, like "$modules = ['events_example', 'user', 'field']". It acts like dependency injection.
  • The test function name should start with "test" (for example, testEventsExampleService), otherwise it will not be included in the test run.

Let's run our Kernel test:

$ lando test modules/custom/events_example/tests/src/Kernel/EventsExampleServiceTest.php

or

$ ../vendor/bin/phpunit -c core modules/custom/events_example/tests/src/Kernel/EventsExampleServiceTest.php

PHPUnit 9.6.17 by Sebastian Bergmann and contributors.

Testing Drupal\Tests\events_example\Kernel\EventsExampleServiceTest
.                                                                   1 / 1 (100%)

Time: 01:24.129, Memory: 10.00 MB

OK (1 test, 1 assertion)

Woohoo! Another successful test in the books!

Final Thoughts

What comes next after writing your first test cases with PHPUnit and Kernel testing? You can proceed to write more test cases to cover other functionality or edge cases in your Drupal project. Additionally, you may want to consider integrating your test suite into a continuous integration (CI) pipeline to automate testing and ensure code quality throughout your development process.

Categories: FLOSS Project Planets

Python Bytes: #377 A Dramatic Episode

Planet Python - Tue, 2024-04-02 04:00
Topics covered in this episode:
• justpath (https://github.com/epogrebnyak/justpath)
• xz back door
• LPython (https://lpython.org)
• dramatic (https://github.com/treyhunner/dramatic)
• Extras
• Joke

Watch on YouTube: https://www.youtube.com/watch?v=eWnYlxOREu4

About the show

Sponsored by ScoutAPM: pythonbytes.fm/scout

Connect with the hosts
• Michael: @mkennedy@fosstodon.org
• Brian: @brianokken@fosstodon.org
• Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list (https://pythonbytes.fm/friends-of-the-show); we'll never share it.

Michael #1: justpath (https://github.com/epogrebnyak/justpath)
• Inspect and refine the PATH environment variable on both Windows and Linux.
• Raw listing, counts, duplicates, invalid entries, corrections: excellent stuff.
• Check out the video: https://asciinema.org/a/642726

Brian #2: xz back door
• In case you kinda heard about this, but not really.
• Very short version:
  • A Microsoft engineer noticed a performance problem with ssh and tracked it to a particular version update of xz.
  • Further investigation found a fairly complex back door that had been worked into xz over multiple years by a relatively new contributor, whose first commit was in early 2022.
  • The problem was caught, but if it had succeeded, it would have been bad.
  • Part of how this happened is that a single primary maintainer was responsible for a very widely used tool included in tons of Linux distributions.
• Some useful articles:
  • Everything I Know About the XZ Backdoor by Evan Boehs, recommended read (https://boehs.org/node/everything-i-know-about-the-xz-backdoor)
• Don't think you're affected? Think again if you use Homebrew, for example: Update and upgrade Homebrew and xz versions (https://micro.webology.dev/2024/03/29/update-and-upgrade.html)
• Notes:
  • Open source maintenance burnout is real.
  • Lots of open source projects are maintained by unpaid individuals for long periods of time.
  • Multi-year sneakiness and social bullying is pretty hard to defend against.
  • Handing off projects to another primary maintainer has to be doable.
    • But now I think we need better tools to vet contributors.
    • Maybe? Or would that just suppress contributions?
• One option to help with burnout: JGMM, Just Give Maintainers Money: Software Needs To Be More Expensive by Glyph (https://blog.glyph.im/2024/03/software-needs-to-be-more-expensive.html)

Michael #3: LPython (https://lpython.org)
• LPython aggressively optimizes type-annotated Python code. It has several backends, including LLVM, C, C++, and WASM.
• LPython's primary tenet is speed.
• Play with the WASM version here: dev.lpython.org
• Still in alpha, so keep that in mind.

Brian #4: dramatic (https://github.com/treyhunner/dramatic)
• By Trey Hunner.
• More drama in the software world, this time in Python.
• Actually, this is just a fun utility to make your Python output more dramatic.
• More fun output with terminaltexteffects (https://github.com/ChrisBuilds/terminaltexteffects), suggested by Allan.

Extras

Brian:
• Textual now has a new inline feature in the latest release (https://github.com/Textualize/textual/releases/tag/v0.55.0).

Michael:
• My keynote talk is out: The State of Python in 2024 (https://www.youtube.com/watch?v=coz1CGRxjQ0)
• Have you browsed your GitHub feed (https://github.com) lately?
• 3.10, 3.9, 3.8 security updates (https://pythoninsider.blogspot.com/2024/03/python-31014-3919-and-3819-is-now.html)

Joke: Definition of terms (https://python-bytes-static.nyc3.digitaloceanspaces.com/definition-of-methodolgy-terms.jpg)
Categories: FLOSS Project Planets

Python Software Foundation: New Open Initiative for Cybersecurity Standards

Planet Python - Mon, 2024-04-01 23:00

The Python Software Foundation is pleased to announce our participation in co-starting a new Open Initiative for Cybersecurity Standards collaboration with the Apache Software Foundation, the Eclipse Foundation, other code-hosting open source foundations, SMEs, industry players, and researchers. This collaboration is focused on meeting the real challenges of cybersecurity in the open source ecosystem, and demonstrating full cooperation with and supporting the implementation of the European Union’s Cyber Resilience Act (CRA). With our combined efforts, we are optimistic that we will reach our goal of establishing common specifications for secure open source development based on existing open source best practices. 

New regulations, such as those in the CRA, highlight the need for secure by design and strong supply chain security standards. The CRA will lead to standard requests from the Commission to the European Standards Organisations and we foresee requirements from the United States and other regions in the future. As open source foundations, we want to respond to these requests proactively by establishing common specifications for secure software development and meet the expectations of the newly defined term Open Source Steward. 

Open source communities and foundations, including the Python community, have long been practicing and documenting secure software development processes. The starting points for creating common specifications around security are already there, thanks to millions of contributions to hundreds of open source projects. In the true spirit of open source, we plan to learn from, adapt, and build upon what already exists for the collective betterment of our greater software ecosystem. 

The PSF’s Executive Director Deb Nicholson will attend and participate in the initial Open Initiative for Cybersecurity Standards meetings. Later on, various PSF staff members will join in relevant parts of the conversation to help guide the initiative alongside their peers. The PSF looks forward to more investment in cybersecurity best practices by Python and the industry overall. 

This community-driven initiative will have a lasting impact on the future of cybersecurity and our shared open source communities. We welcome you to join this collaborative effort to develop secure open source development specifications. Participate by sharing your knowledge and input, and by raising up existing community contributions. Sign up for the Open Initiative for Process Specifications mailing list to get involved and stay updated on this initiative. Check out the press releases from the Eclipse Foundation and the Apache Software Foundation for more information.

Categories: FLOSS Project Planets

SoK 2024 - Implementing package management features from RKWard into Cantor via a GUI First Blog

Planet KDE - Mon, 2024-04-01 20:00
Introduction

Hi! I’m Krish, an undergraduate student at the University of Rochester studying Computer Science. For this year’s Season of KDE I’m working on implementing package management features from RKWard into Cantor via a GUI. I’m being mentored by Alexander Semke.

In an effort to improve usability and functionality, this project seeks to strengthen Cantor's capabilities as a scientific computing platform by incorporating package management tools modeled after those found in RKWard and RStudio. The goal is to create an intuitive graphical interface within Cantor for managing packages in R, Octave, Julia, and other languages.

Set up development environment for Cantor
  • Used virt-manager to set up a Kubuntu virtual machine
  • Installed dependencies for Cantor
  • Opened a terminal emulator and ran, in my favorite shell:

cd cantor
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/cantor/usr/local -DCMAKE_BUILD_TYPE=RELEASE
make
make install

Digging into RKWard's package management system

RKWard, an open-source, cross-platform integrated development environment for the R programming language, implements package management using R's built-in package management system and provides a GUI to simplify package installation, updating, and removal. This writeup will discuss the technical aspects of package management in RKWard.

Broadly, RKWard leverages R's built-in package management functions, such as install.packages(), update.packages(), and remove.packages(), to handle package management tasks. These functions interact with R's package repository (CRAN by default) and local package libraries to perform package-related operations.
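To make this concrete, below is a minimal, illustrative Python sketch of the pattern a GUI front end can use to delegate package operations to these built-in R functions via the Rscript command line. This is not RKWard's or Cantor's actual code (both are C++/Qt applications), and the helper names are invented for illustration:

import subprocess

def r_install_package(name: str, repo: str = "https://cloud.r-project.org") -> bool:
    """Install an R package by delegating to R's own install.packages()."""
    r_code = f'install.packages("{name}", repos = "{repo}")'
    result = subprocess.run(["Rscript", "-e", r_code], capture_output=True, text=True)
    # A GUI would surface result.stdout / result.stderr in its console widget.
    return result.returncode == 0

def r_remove_package(name: str) -> bool:
    """Remove an installed R package via remove.packages()."""
    result = subprocess.run(
        ["Rscript", "-e", f'remove.packages("{name}")'],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Example usage (assumes Rscript is on PATH):
# r_install_package("ggplot2")

The design point is that the GUI owns only the presentation layer; all package logic stays in R, so behavior matches what a user would get at the R prompt.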

RKWard's package management GUI is built on top of these R functions, allowing users to perform package management tasks without directly interacting with the command-line interface. The GUI provides a more user-friendly experience and enables users to manage packages with just a few clicks.

When a user requests to install or update a package, RKWard performs the following technical steps:

  • Dependency resolution: RKWard checks for dependencies required by the package and resolves them using R's available.packages() and installed.packages() functions. If dependencies are not satisfied, RKWard prompts the user to install them automatically.
  • CMake Configuration: The FindR.cmake script is used during the compilation of RKWard to locate the R installation and its components, including the R library directory. This information is necessary for RKWard to interface with R and manage packages.
  • Package download and installation: RKWard downloads package source code or binary files from CRAN or other repositories using R's download.file() function. The packages are then installed using install.packages() with appropriate options, such as specifying the library directory and installing dependencies.
  • Package library management: RKWard installs packages in the R library directory, which is typically located in the user's home folder. The library directory can be configured in RKWard's settings. RKWard ensures that packages are installed in the correct library directory, depending on the user's R version and operating system.
  • Integration with R console: RKWard includes an embedded R console, which allows users to see the output of package management commands and interact with them directly if needed.
  • Error handling: RKWard provides error messages and troubleshooting advice if package installation or loading fails. This can include issues like missing dependencies, compilation errors, or permissions problems.
  • Package loading: After installation, RKWard loads the package into the current R session using R's library() or require() functions, making its functions and datasets available for use.
  • RKWard supports package management on various platforms, including Windows, macOS, and Linux. On Windows and macOS, RKWard typically installs packages as pre-compiled binaries for improved performance and ease of installation. On Linux, RKWard installs packages from source, which may require additional development libraries or tools to be installed on the user's system.

RKWard also supports installing packages from local files, Git repositories, and other sources by providing options to specify custom package repositories or URLs. This flexibility allows users to manage their R packages according to their specific needs.

In summary, RKWard implements package management using R's built-in package management functions and provides a user-friendly GUI to simplify package installation, updating, and removal. By leveraging R's package management system, RKWard enables users to manage their R packages efficiently and effectively, regardless of their operating system or R version.

Next Steps
  • Begin implementing package management features in Cantor.
  • Focus on creating a consistent and user-friendly interface for package management.
Categories: FLOSS Project Planets

Hynek Schlawack: Python Project-Local Virtualenv Management Redux

Planet Python - Mon, 2024-04-01 20:00

One of my first TIL entries was about how you can imitate Node’s node_modules semantics in Python on UNIX-like operating systems. A lot has happened since then (for the better!) and it’s time for an update. direnv still rocks, though.

Categories: FLOSS Project Planets

Open Source AI Definition – Weekly update April 2

Open Source Initiative - Mon, 2024-04-01 17:10
Seeking document reviewers for Pythia and OpenCV
  • We are now in the process of reviewing legal documents to check the compatibility with the version 0.0.6 definition of open-source AI, specifically for Pythia and OpenCV.
    • Click here to see the past activities of the four working groups
  • To get involved, respond on the forum or message Mer here.
The data requirement: “Sufficiently detailed information” for what?
  • Central question: What criteria define “sufficiently detailed information”?
    • There is a wish to change the term “sufficiently detailed information” to “sufficiently detailed to allow someone to replicate the entire dataset”, to avoid vagueness and to make reproducibility an explicit marker of openness
  • Stefano points out that “reproducibility” in itself might not be a sustainable term due to its loaded connotations.
  • There’s a proposal to modify the Open Source AI Definition requirement to specify providing detailed information to replicate the entire dataset.
    • However, concerns arise about how this would apply to various machine learning methods where dataset replication might not be feasible.
Action on the 0.0.6 draft
  • Contribution concerned with the usage of the wording “deploy” under “Out of Scope Issues” in relation to code alone.
    • OSI has replied asking for clarification on the question, as “deploy” refers to the whole AI system, not just the code.
  • Contribution concerned with the wording of “learning, using, sharing and improving software systems” under “Why We Need Open Source Artificial Intelligence”. Specifically, for AI as opposed to “traditional” software, there is growing concern that these values may be too broad relative to the safety and ethical impact AI can have.
    • OSI replied that while the ethics of AI will continue to be discussed, these discussions are out of the scope of this definition. This will be elaborated on in an upcoming FAQ.
Categories: FLOSS Research

The Drop Times: Drupal Page Builders—Part 3: Other Alternative Solutions

Planet Drupal - Mon, 2024-04-01 16:04
Venture into the realm of alternatives to Paragraphs and Layout Builder with the third installment of the Drupal Page Builder series by André Angelantoni, Senior Drupal Architect at HeroDevs, showcased on The DropTimes. This segment navigates through a variety of server-side rendered page generation solutions, offering a closer look at innovative modules that provide a broader range of page-building capabilities beyond Drupal's native tools. From the adaptability of Component Builder and the intuitive DXPR Page Builder to the cutting-edge HAX module utilizing W3C-standard web components, this article illuminates a path for developers seeking polished, ready-made components for their site builds. Before exploring advanced Drupal solutions, ensure you're caught up by reading the first two parts of the series, laying the groundwork for a comprehensive understanding of Drupal's extensive page-building ecosystem.
Categories: FLOSS Project Planets

The Drop Times: DrupalCamp Ouagadougou Concludes Successfully

Planet Drupal - Mon, 2024-04-01 16:04
Experience the highlights of DrupalCamp Ouagadougou! Dive into captivating pictures and relive the vibrant atmosphere of this successful event.
Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #444 - Design to Development Workflow Optimization

Planet Drupal - Mon, 2024-04-01 14:00

Today we are talking about design to development hand off, common complications, and ways to optimize your process with guest Crispin Bailey. We’ll also cover Office Hours as our module of the week.

For show notes visit: www.talkingDrupal.com/444

Topics
  • Primary activities of the team
  • Where does handoff start
  • Handoff artifact
  • Tools for collaboration
  • Figma
  • Evaluating new tools
  • Challenges of developers and designers working together
  • How can we optimize handoff
  • What steps can the dev team take to facilitate smooth handoff
  • Framework recommendation
  • Final quality
  • AI
Guests

Crispin Bailey - kalamuna.com crispinbailey

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Anna Mykhailova - kalamuna.com amykhailova

MOTW Correspondent

Martin Anderson-Clutz - mandclu

  • Brief description:
    • Have you ever wanted to manage and display the hours of operation for a business on your Drupal site? There’s a module for that
  • Module name/project name: Office Hours
  • Brief history
    • How old: created in Jan 2008 by Ozeuss, though recent releases are by John Voskuilen of the Netherlands
    • Versions available: 7.x-1.11 and 8.x-1.17
  • Maintainership
    • Actively maintained, latest release was 3 weeks ago
    • Security coverage
    • Test coverage
    • Documentation: no user guide, but a pretty extensive README
    • Number of open issues: 15 open issues, only 1 of which is a bug against the current branch, though it’s postponed for more info
  • Usage stats:
    • Almost 20,000 sites
  • Module features and usage
    • Previously covered in episode 113, more than 8 years ago, in the “Drupal 6 end of life” episode
    • The module provides a specialized widget to set the hours for each weekday, with the option to have more than one time slot per day
    • You can define exceptions, for example on stat holidays
    • You can also define seasons, with a start and end date, during which the hours are different
    • The module also offers a variety of options for formatting the output:
    • You can show days as ranges, for example Monday to Friday, 9am to 5pm, 12-hour or 24-hour clocks, and so on
    • Obviously it will show any exceptions or upcoming seasonal hours too
    • It can also show an “open now” or “closed now” indicator
    • It can create schema.org-compliant markup for openingHours, and has integration with the Schema.org Metatag module
    • Office Hours does all this with a new field type, so you could add it to Stores in a Drupal Commerce site, a Locations content type in a site for a bricks-and-mortar chain, or if you just need a single set of hours for the site, you should be able to use it with something like the Config Pages module
    • The README file also includes some suggestions on how to use Office Hours with Views, which can give you a lot of flexibility on where and how to show the information
   
Categories: FLOSS Project Planets

The Drop Times: The Power of Embracing New Challenges and Technologies

Planet Drupal - Mon, 2024-04-01 13:04

“The greater the obstacle, the more glory in overcoming it.” – Molière


Dear Readers,

Stepping out of our comfort zones is undoubtedly a daunting task. Yet, it's precisely this leap into the unknown that often leads to remarkable growth and self-discovery. Embracing new challenges and learning from scratch can feel overwhelming at first, but through these experiences, we truly push our limits and uncover our hidden capabilities.

In our journey of embracing the unfamiliar, we expand our skill sets and gain a deeper understanding of ourselves and the paths we never thought possible. Each new challenge becomes an opportunity to stretch beyond what we thought we were capable of, illuminating uncharted territories of potential and opportunity.

Embrace the technological diversity surrounding us, as it serves as a rich tapestry of tools and methodologies that can enhance our creativity, efficiency, and impact in ways we've only begun to explore. Like the inspiring journey of Tanay Sai, a seasoned builder, engineering leader, and AI/ML practitioner who recently embarked on a transformative adventure beyond the familiar horizons of Drupal. Tanay's story is a testament to the idea that stepping out of one's comfort zone can lead to groundbreaking achievements and a deeper understanding of the multifaceted digital ecosystem.

The importance of continuous learning and the willingness to embrace new challenges is profound. It encourages us to look beyond the familiar, to experiment with emerging technologies, and to remain adaptable in our pursuit of delivering exceptional digital experiences.

Now, let's take a moment to revisit the highlights from last week's coverage at The Drop Times.

Last month, we celebrated the Women in Drupal community and released the second part of "Inspiring Inclusion: Celebrating the Women in Drupal | #2" penned by Alka Elizabeth. Part 3 of this series will be coming soon.

Explore the dynamic evolution of Drupal's page-building features in Part 2 of André Angelantoni's latest series on The Drop Times. Each module discussed extends Layout Builder and can be integrated individually. Part 3 might already be released by the time this newsletter comes your way. Access the second part here.

DrupalCon Portland 2024, scheduled from May 6 to 9, will feature an empowering Women in Drupal Lunch event. This gathering aims to uplift female attendees and inspire and support women within the Drupal community. Learn more here.

Save the dates for DrupalCamp Spain 2024 in Benidorm! The event is scheduled for October 25 and 26, with the venue to be announced soon. Additionally, mark your calendars for October 24, designated as Business Day.

DrupalCamp Belgium has unveiled the keynote speakers for its highly anticipated 2024 edition in Ghent. For more information and to discover the lineup of keynote speakers, be sure to check out the details here. A complete list of events for the week is available here. Additionally, Gander Documentation is now available on Drupal.org, as announced by Janez Urevc, Tag1 Consulting's Strategic Growth and Innovation Manager, on March 25, 2024.

Also, read about Tanay Sai, an accomplished builder, engineering leader, and AI/ML practitioner who shares insights into his transformative journey beyond Drupal. After a decade immersed in the Drupal realm, Tanay candidly expresses his pivotal decision to venture beyond its confines. Learn more here.

Monika Branicka of Droptica conducts a comprehensive analysis of content management systems (CMS) employed by 314 higher education institutions in Poland. This study aims to unveil the prevalent CMS preferences among both public and non-public universities, providing insights into the educational sector's technological landscape. The report comes amidst a growing call for resources to track Drupal usage across industry sectors, coinciding with similar studies conducted by The DropTimes and Grzegorz Pietrzak.

DevBranch has announced the launch of a Drupal BootCamp tailored for aspiring web developers. This initiative aims to equip individuals with the necessary skills and knowledge to excel in web development using Drupal. For further details, click here.

The development of Drupal 11 has reached a critical phase, marked by ongoing updates to its system requirements within the development branch. Gábor Hojtsy has provided valuable insights on preparing core developers and informing the community about these changes. Stay updated on the latest developments as Drupal 11 evolves to meet the needs of its users and developers.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Elma John
Sub-editor, TheDropTimes.

Categories: FLOSS Project Planets

Luke Plant: Enforcing conventions in Django projects with introspection

Planet Python - Mon, 2024-04-01 11:05

Naming conventions can make a big difference to the maintenance issues in software projects. This post is about how we can use the great introspection capabilities in Python to help enforce naming conventions in Django projects.


Let’s start with an example problem and the naming convention we’re going to use to solve it. There are many other applications of the techniques here, but it helps to have something concrete.

The problem: DateField and DateTimeField confusion

Over several projects I’ve found that inconsistent or bad naming of DateField and DateTimeField fields can cause various problems.

First, poor naming means that you can confuse them for each other, and this can easily trip you up. In Python, datetime is a subclass of date, so if you use a field called created_date assuming it holds a date when it actually holds a datetime, it might not be obvious initially that you are mishandling the value, but you’ll often have subtle problems down the line.
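The trap is easy to demonstrate with plain Python, independent of Django:

from datetime import date, datetime

value = datetime(2024, 3, 25, 19, 0)

# datetime is a subclass of date, so a date check happily accepts a datetime:
print(isinstance(value, date))  # True

# Equality against a plain date quietly returns False rather than erroring:
print(value == date(2024, 3, 25))         # False
print(value.date() == date(2024, 3, 25))  # True, after an explicit conversion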

Second, sometimes you have a field named something like expired which actually holds the timestamp of when the record expired, but it could easily be confused for a boolean field.

Third, not having a strong convention, or having multiple conventions, leads to unnecessary time wasted on decisions that could have been made once.

Finally, inconsistency in naming is just confusing and ugly for developers, and often for users further down the line, because names tend to leak.

Even if you do have an established convention, it’s possible for people not to know. It’s also very easy for people to change a field’s type between date and datetime without also changing the name. So merely having the convention is not enough, it needs to be enforced.

Note

If you want to change the name and type of a field (or any other attribute) while preserving the data as much as possible, you usually need to do it in two or more stages depending on your needs, and always check the migrations created – otherwise Django’s migration framework will just see one field removed and a completely different one added, and generate migrations that will destroy your data.

For this specific example, the convention I quite like is:

  • field names should end with _at for timestamp fields that use DateTimeField, like expires_at or deleted_at.

  • field names should end with _on or _date for fields that use DateField, like issued_on or birth_date.

This is based on the English grammar rule that we use “on” for dates but “at” for times – “on the 25th March”, but “at 7:00 pm” – and conveniently it also needs very few letters and tends to read well in code. The _date suffix is also helpful in various contexts where _on seems very unnatural. You might want different conventions, of course.
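Under this convention, a model might look like the following sketch (the model and field names are invented for illustration):

from django.db import models

class Subscription(models.Model):
    # Timestamps use DateTimeField and end with _at:
    created_at = models.DateTimeField(auto_now_add=True)
    expires_at = models.DateTimeField()

    # Calendar dates use DateField and end with _on or _date:
    issued_on = models.DateField()
    renewal_date = models.DateField(null=True, blank=True)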

To enforce our convention with automated checks, we need a few tools.

The tools

Introspection

Introspection means the ability to use code to inspect code, and typically we’re talking about doing this when our code is already running, from within the same program and using the same programming language.

In Python, this starts from simple things like isinstance() and type() to check the type of an object, to things like hasattr() to check for the presence of attributes and many other more advanced techniques, including the inspect module and many of the metaprogramming dunder methods.
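A few of these in action, using only the standard library:

import inspect
from datetime import date, datetime

value = datetime.now()
print(type(value))               # <class 'datetime.datetime'>
print(isinstance(value, date))   # True, the subclass relationship again
print(hasattr(value, "tzinfo"))  # True

def greet(name: str, punctuation: str = "!") -> str:
    return f"Hello, {name}{punctuation}"

# The inspect module can report a callable's signature, among much else:
print(inspect.signature(greet))  # (name: str, punctuation: str = '!') -> str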

Django app and model introspection

Django is just Python, so you can use all normal Python introspection techniques. In addition, there is a formally documented and supported set of functions and methods for introspecting Django apps and models, such as the apps module and the Model _meta API.
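For example, run from a Django shell (manage.py shell), the following uses only documented APIs to print every installed model and its field names:

from django.apps import apps

for app_config in apps.get_app_configs():
    for model in app_config.get_models():
        # Model._meta is the formally supported introspection API for fields:
        field_names = [field.name for field in model._meta.get_fields()]
        print(f"{app_config.label}.{model.__name__}: {field_names}")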

Django checks framework

The third main tool we’re going to use in this solution is Django’s system checks framework, which allows us to run certain kinds of checks, at both “warning” and “error” level. This is the least important tool, and we could in fact switch it out for something else like a unit test.

The solution

It’s easiest to present the code, and then discuss it:

from django.apps import AppConfig, apps  # AppConfig is used in the annotations below
from django.conf import settings
from django.core.checks import Tags, Warning, register


@register()
def check_date_fields(app_configs, **kwargs):
    exceptions = [
        # This field is provided by Django's AbstractBaseUser, we don't control it
        # and we’ll break things if we change it:
        "accounts.User.last_login",
    ]
    from django.db.models import DateField, DateTimeField

    errors = []
    for field in get_first_party_fields():
        field_name = field.name
        model = field.model
        if f"{model._meta.app_label}.{model.__name__}.{field_name}" in exceptions:
            continue
        # Order of checks here is important, because DateTimeField inherits from DateField
        if isinstance(field, DateTimeField):
            if not field_name.endswith("_at"):
                errors.append(
                    Warning(
                        f"{model.__name__}.{field_name} field expected to end with `_at`, "
                        + "or be added to the exceptions in this check.",
                        obj=field,
                        id="conventions.E001",
                    )
                )
        elif isinstance(field, DateField):
            if not (field_name.endswith("_date") or field_name.endswith("_on")):
                errors.append(
                    Warning(
                        f"{model.__name__}.{field_name} field expected to end with `_date` or `_on`, "
                        + "or be added to the exceptions in this check.",
                        obj=field,
                        id="conventions.E002",
                    )
                )
    return errors


def get_first_party_fields():
    for app_config in get_first_party_apps():
        for model in app_config.get_models():
            yield from model._meta.get_fields()


def get_first_party_apps() -> list[AppConfig]:
    return [app_config for app_config in apps.get_app_configs() if is_first_party_app(app_config)]


def is_first_party_app(app_config: AppConfig) -> bool:
    # FIRST_PARTY_APPS is a custom setting listing the apps we own, either by
    # module name or by the dotted path of the AppConfig class.
    if app_config.module.__name__ in settings.FIRST_PARTY_APPS:
        return True
    app_config_class = app_config.__class__
    if f"{app_config_class.__module__}.{app_config_class.__name__}" in settings.FIRST_PARTY_APPS:
        return True
    return False

We start here with some imports and registration, as documented in the “System checks” docs. You’ll need to place this code somewhere that will be loaded when your application is loaded.

Our checking function defines some allowed exceptions, because there are some things out of our control, or there might be other reasons. It also mentions the exceptions mechanism in the warning message. You might want a different mechanism for exceptions here, but I think having some mechanism like this, and advertising its existence in the warnings, is often pretty important. Otherwise, you can end up with worse consequences when people just slavishly follow rules. Notice how in the exception list above I’ve given a comment detailing why the exception is there though – this helps to establish a precedent that exceptions should be justified, and the justification should be there in the code.

We then loop through all “first party” model fields, looking for DateTimeField and DateField instances. This is done using our get_first_party_fields() utility, which is defined in terms of get_first_party_apps(), which in turn depends on is_first_party_app() and a custom FIRST_PARTY_APPS setting listing the apps we own, either by module name or by the dotted path of their AppConfig class.

The id values passed to Warning here are examples – you should change according to your needs. You might also choose to use Error instead of Warning.

Output

When you run manage.py check, you’ll then get output like:

System check identified some issues:

WARNINGS:

myapp.MyModel.created: (conventions.E001) MyModel.created field expected to end with `_at`, or be added to the exceptions in this check.

System check identified 1 issue (0 silenced).

As mentioned, you might instead want to run this kind of check as a unit test.
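A minimal sketch of that variant, assuming pytest with a configured Django test setup and a hypothetical import path for the check function:

from myproject.checks import check_date_fields  # hypothetical module path

def test_date_field_naming_conventions():
    # The check returns a list of Warning objects; an empty list means every
    # first-party date/datetime field follows the naming convention.
    assert check_date_fields(app_configs=None) == []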

Conclusion

There are many variations on this technique that can be used to great effect in Django or other Python projects. Very often you will be able to play around with a REPL to do the introspection you need.

Where it is possible, I find doing this far more effective than attempting to document things and relying on people reading and remembering those docs. Every time I’m tripped up by bad names, or when good names or a strong convention could have helped me, I try to think about how I could push people towards a good convention automatically – while also giving a thought to unintended bad consequences of doing that prematurely or too forcefully.

Categories: FLOSS Project Planets

Marknote 1.1.0

Planet KDE - Mon, 2024-04-01 11:05

Marknote 1.1.0 is out! Marknote is the new WYSIWYG note-taking application from KDE. Despite the latest release being just a few days ago, we have been hard at work and added a few new features and, more importantly, fixed some bugs.

Marknote now boasts broader Markdown support, and can now display images and task lists in the editor. And once you are done editing your notes, you can export them to various formats, including PDF, HTML and ODT.

Export to PDF, HTML and ODT

Marknote’s interface now seamlessly integrates the colors assigned to your notebooks, enhancing its visual coherence and making it easier to distinguish one notebook from another. Additionally, your notebooks remember the last opened note, automatically reopening it upon selection.

Accent color in list delegate

We’ve also introduced a convenient command bar similar to the one in Merkuro. This provides quick access to essential actions within Marknote. Currently it only offers creating a new notebook or note, but we plan to make more actions available in the future. Finally, we have reworked all the dialogs in Marknote to use the newly introduced FormCardDialog from KirigamiAddons.

Command bar

We have created a small feature roadmap with features we would like to add in the future. Contributions are welcome!

Packager section

You can find the package on download.kde.org and it has been signed with my GPG key.

Note that this release introduces a new recommended dependency, md4c, and requires the latest Kirigami Addons release (published a few hours ago).

Categories: FLOSS Project Planets
