So I tried the fractions module:
from fractions import Fraction
result = Fraction(3802951800684688204490109616126, 1) * Fraction(1, 2)
print(result)
# Show the exact numerator and denominator
print(result.numerator)
print(result.denominator)
with the result
1901475900342344102245054808063
1901475900342344102245054808063
1
That serves my purpose (working on the Collatz Conjecture).
I could have used floor division (//), but I don’t trust my code enough to be absolutely sure that I am dividing an even number. (I am using a protective test that the denominator is 1, just in case.)
Is there another idea that might also be useful? My current work is up to 2^100, but I’ll be doing checks at much larger indices later.
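For comparison, the same guarantee can be had on plain ints with divmod, which returns the quotient and remainder together, so the remainder doubles as the evenness check. A minimal sketch (the helper name halve_exactly is made up):

```python
def halve_exactly(n: int) -> int:
    """Halve n exactly; raise instead of silently truncating an odd value."""
    q, r = divmod(n, 2)
    if r != 0:
        raise ValueError(f"{n} is odd; refusing to halve it")
    return q

print(halve_exactly(3802951800684688204490109616126))
# 1901475900342344102245054808063
```

This keeps everything in exact integer arithmetic, which stays cheap even well past 2^100.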
2 posts – 2 participants
I’m developing a FastAPI application and would like to place a bunch of APIRouter instances in a directory. I don’t want to manually import each of the routers. I have found a way to make it dynamic, but it is a bit of an eyesore:
# routers/__init__.py
from importlib import import_module
import pkgutil
from pathlib import Path
from fastapi import APIRouter
routers: list[APIRouter] = []
for module in pkgutil.walk_packages([Path(__file__).absolute().parent]):
    routers.append(import_module(f".{module.name}", __package__).router)
This seems like a hack, and I doubt I will be able to understand what it does in 3 years. Is there a better way?
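One way to keep the same behavior but make the intent readable is to pull the loop into a named helper. This is only a sketch; collect_routers is a made-up name, and it assumes (like the original) that every module in the package defines a module-level router:

```python
from importlib import import_module
from pkgutil import iter_modules

def collect_routers(package):
    """Import every submodule of `package` and collect its `router` attribute."""
    found = []
    for info in iter_modules(package.__path__):
        module = import_module(f"{package.__name__}.{info.name}")
        found.append(module.router)  # each module is expected to expose `router`
    return found
```

Inside routers/__init__.py this would be called as routers = collect_routers(sys.modules[__name__]); the loop is the same, but a future reader gets a name and a docstring instead of a one-liner.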
Thanks!
2 posts – 2 participants
from p5 import *

# LEVEL DEFINITIONS
LEVELS = [
    {
        "grid": (4, 4),           # grid size: 4 columns x 4 rows
        "start": (0, 0),          # start node
        "end": (4, 4),            # end node
        "blocked": [(1,1),(2,1)], # forbidden cells
        "symbols": [(0,2,'circle'), (3,1,'circle')]  # symbols to visit before the end
    },
    {
        "grid": (5, 5),
        "start": (0, 4),
        "end": (4, 0),
        "blocked": [(2,2),(1,3)],
        "symbols": [(0,2,'circle'), (2,4,'circle')]
    },
    {
        "grid": (6, 6),
        "start": (0, 0),
        "end": (5, 5),
        "blocked": [(2,2),(3,3),(1,4)],
        "symbols": [(1,1,'circle'), (4,4,'circle')]
    }
]

# VISUAL CONFIGURATION
CELL = 80                 # size of one cell
MARGIN = 40               # margin around the grid for drawing
SNAP_RADIUS = CELL / 1.5  # radius for "catching" a node with a click

# GAME STATE
current_level = 0         # current level
path = []                 # list of visited nodes along the path
lines_drawn = set()       # lines already drawn (so we don't retrace them)
game_finished = False
# UTILITY FUNCTIONS
def node_position(node):
    """
    Convert a node (col, row) into pixel coordinates for display
    """
    x, y = node
    return MARGIN + x * CELL, MARGIN + y * CELL

def distance(x1, y1, x2, y2):
    """
    Euclidean distance between two points
    """
    return ((x1-x2)**2 + (y1-y2)**2)**0.5

def snap_to_node(mx, my, grid):
    """
    Return the node closest to the click
    """
    closest = None
    min_dist = SNAP_RADIUS
    cols, rows = grid
    for x in range(cols+1):
        for y in range(rows+1):
            nx, ny = node_position((x, y))
            d = distance(mx, my, nx, ny)
            if d <= min_dist:
                closest = (x, y)
                min_dist = d
    return closest

def is_adjacent(a, b):
    """
    Check whether two nodes are horizontally or vertically adjacent
    """
    return abs(a[0]-b[0]) + abs(a[1]-b[1]) == 1

def node_blocked(node):
    """
    Check whether a node is in the list of forbidden blocks
    """
    x, y = node
    return (x, y) in LEVELS[current_level].get('blocked', [])

def line_used(a, b):
    """
    Check whether a line between two nodes has already been drawn
    """
    return ((a,b) in lines_drawn) or ((b,a) in lines_drawn)
# WINDOW SETUP
def setup():
    """
    Initialize the canvas based on the grid size
    """
    cols, rows = LEVELS[current_level]["grid"]
    createCanvas(2*MARGIN + cols*CELL, 2*MARGIN + rows*CELL)
    noLoop()  # manual drawing only (refresh with redraw())

def draw_grid():
    """
    Draw the main grid
    """
    cols, rows = LEVELS[current_level]["grid"]
    stroke(80)
    for x in range(cols+1):
        line(MARGIN+x*CELL, MARGIN, MARGIN+x*CELL, MARGIN+rows*CELL)
    for y in range(rows+1):
        line(MARGIN, MARGIN+y*CELL, MARGIN+cols*CELL, MARGIN+y*CELL)

def draw_blocks():
    """
    Draw the blocked cells
    """
    fill(0)
    noStroke()
    for bx, by in LEVELS[current_level].get('blocked', []):
        rect(MARGIN+bx*CELL, MARGIN+by*CELL, CELL, CELL)

def draw_symbols():
    """
    Draw the symbols (e.g. circles) the player must visit
    """
    symbols = LEVELS[current_level].get('symbols', [])
    strokeWeight(8)
    for x, y, kind in symbols:
        nx, ny = node_position((x, y))
        if kind=='circle':
            stroke(0,0,255)
            noFill()
            ellipse(nx, ny, 20, 20)
    strokeWeight(1)

def draw_points():
    """
    Draw the start point (green) and the end point (red)
    """
    sx, sy = LEVELS[current_level]["start"]
    ex, ey = LEVELS[current_level]["end"]
    strokeWeight(10)
    stroke(0,255,0)
    px, py = node_position((sx, sy))
    point(px, py)
    stroke(255,0,0)
    px, py = node_position((ex, ey))
    point(px, py)
    strokeWeight(1)

def draw_path():
    """
    Draw the yellow path traced by the player
    """
    stroke(255,255,0)
    strokeWeight(6)
    for i in range(len(path)-1):
        a = path[i]
        b = path[i+1]
        x1, y1 = node_position(a)
        x2, y2 = node_position(b)
        line(x1, y1, x2, y2)
    strokeWeight(1)
# INTERACTION: CLICKS
# DRAWING
def draw():
    """
    Draw the grid, blocks, symbols, start/end points and the path
    """
    background(30)
    global path, lines_drawn, current_level, game_finished
    if game_finished:
        fill(255)
        textSize(32)
        textAlign(CENTER, CENTER)
        text("GAME FINISHED", width/2, height/2)
        return
    draw_grid()
    draw_blocks()
    draw_symbols()
    draw_path()
    draw_points()
    if mouseIsPressed:
        """
        Each click advances the path:
        - click on the start with an empty path: begin the path
        - click on a valid adjacent node: add it to the path
        - click on the end: check the symbols and finish the level
        """
        if game_finished:
            return
        # find the node closest to the click
        node = snap_to_node(mouseX, mouseY, LEVELS[current_level]["grid"])
        if node is None:
            return
        # starting the path
        if not path and node == LEVELS[current_level]["start"]:
            path.append(node)
            lines_drawn = set()
            redraw()
            return
        # the path has already been started
        elif path:
            last = path[-1]
            # click on the end node
            if node == LEVELS[current_level]["end"]:
                # check that every symbol has been visited
                symbols = LEVELS[current_level].get('symbols', [])
                visited = set(path)
                all_symbols = all((x,y) in visited for x,y,_ in symbols)
                if not all_symbols:
                    print("You must visit every symbol!")
                    return
                # level complete: move on to the next one
                current_level += 1
                path = []
                lines_drawn = set()
                if current_level >= len(LEVELS):
                    game_finished = True
                redraw()
                return
            # click on a valid adjacent node
            elif is_adjacent(last, node) and not node_blocked(node) and not line_used(last, node):
                path.append(node)
                lines_drawn.add((last, node))
                redraw()

run()
It doesn’t work, and I don’t know why… can anyone help me?
3 posts – 2 participants
To read more about the performance work, and see lots of plots, I wrote a post on it: https://iscinumpy.dev/post/packaging-faster
For the release notes: Release 26.0rc1 · pypa/packaging · GitHub
Please try it out before the final release, which should be in about a week assuming no blockers.
2 posts – 2 participants
import pygame
import sys
# --- CONFIG ---
SCREEN_WIDTH = 800
SCREEN_HEIGHT = 600
BLOCK_SIZE = 40
WORLD_WIDTH = 20
WORLD_HEIGHT = 15

# Colors
SKY = (135, 206, 235)
GRASS = (34, 139, 34)
DIRT = (139, 69, 19)
STONE = (100, 100, 100)
PLAYER_COLOR = (255, 0, 0)

# Block IDs
AIR = 0
GRASS_BLOCK = 1
DIRT_BLOCK = 2
STONE_BLOCK = 3

BLOCK_COLORS = {
    GRASS_BLOCK: GRASS,
    DIRT_BLOCK: DIRT,
    STONE_BLOCK: STONE
}

pygame.init()
screen = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT))
pygame.display.set_caption("Mini Minecraft")
clock = pygame.time.Clock()

# --- WORLD GENERATION ---
world = [[AIR for _ in range(WORLD_WIDTH)] for _ in range(WORLD_HEIGHT)]
for y in range(WORLD_HEIGHT):
    for x in range(WORLD_WIDTH):
        if y > 10:
            world[y][x] = STONE_BLOCK
        elif y > 8:
            world[y][x] = DIRT_BLOCK
        elif y == 8:
            world[y][x] = GRASS_BLOCK

# Player
player_x = 5
player_y = 5

# --- MAIN LOOP ---
while True:
    clock.tick(60)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        # Mouse: break / place blocks
        if event.type == pygame.MOUSEBUTTONDOWN:
            mx, my = pygame.mouse.get_pos()
            gx = mx // BLOCK_SIZE
            gy = my // BLOCK_SIZE
            if 0 <= gx < WORLD_WIDTH and 0 <= gy < WORLD_HEIGHT:
                if event.button == 1:  # Left click breaks
                    world[gy][gx] = AIR
                elif event.button == 3:  # Right click places dirt
                    world[gy][gx] = DIRT_BLOCK

    # Movement
    keys = pygame.key.get_pressed()
    if keys[pygame.K_a]:
        player_x -= 0.1
    if keys[pygame.K_d]:
        player_x += 0.1
    if keys[pygame.K_w]:
        player_y -= 0.1
    if keys[pygame.K_s]:
        player_y += 0.1

    screen.fill(SKY)

    # Draw world
    for y in range(WORLD_HEIGHT):
        for x in range(WORLD_WIDTH):
            block = world[y][x]
            if block != AIR:
                pygame.draw.rect(
                    screen,
                    BLOCK_COLORS[block],
                    (x * BLOCK_SIZE, y * BLOCK_SIZE, BLOCK_SIZE, BLOCK_SIZE)
                )

    # Draw player
    pygame.draw.rect(
        screen,
        PLAYER_COLOR,
        (int(player_x * BLOCK_SIZE), int(player_y * BLOCK_SIZE), BLOCK_SIZE, BLOCK_SIZE)
    )

    pygame.display.flip()
2 posts – 2 participants
type() defines only (B, S) as semantic axes. This proposes an opt-in axes={...} parameter to make framework metadata first-class and introspectable, without grammar changes. Legacy classes unchanged; opt-in classes get __axes__.
Per Guido van Rossum’s suggestion (personal email, Jan 6, 2026), posting to Typing for review.
The Problem
Python’s type(name, bases, namespace) has two semantic axes:
- B (__bases__): inheritance hierarchy
- S (__dict__): attributes and methods
Frameworks need more. In OpenHCS (microscopy automation), we need scope, registry membership, priority—none of which type() provides. The workaround is packing them into S:
class MyStep(Step):
    __scope__ = "/pipeline/step_0"    # framework axis packed into namespace
    __registry__ = "step_handlers"    # another axis packed into namespace
This works, but:
- Flattens independent axes into a single namespace, which loses per-axis inheritance and creates metadata/method collisions (orthogonality is a proven result, not an assumption; see Paper 1).
- Requires per-framework metaclass machinery
- Not uniformly introspectable
- A type checker can’t distinguish __scope__ (metadata) from scope() (method)
One immediate payoff of first-class axes is stronger type-based dispatch: frameworks can distinguish classes via axes + MRO without probing ad-hoc attributes at runtime.
Proposed Solution
Add an opt-in axes parameter to type():
MyStep = type("MyStep", (Step,), {"process": fn},
              axes={"scope": "/pipeline/step_0", "registry": STEP_REGISTRY})
MyStep.__axes__  # {"scope": "/pipeline/step_0", "registry": ...}
Key properties:
- Opt-in: No axes = current behavior unchanged
- No grammar change: use axes_type/with_axes today; class-statement keywords could be future sugar
- Inheritance: per-key MRO resolution, leftmost wins unless overridden
- Not core identity: CPython’s isinstance/issubclass stay keyed on (B, S); axes are framework-level metadata
Why not just metaclasses? Metaclasses can stash metadata, but every framework invents its own dunders. A uniform __axes__ surface makes detection, tooling, and interop predictable.
Working Prototype
I have a working implementation:
from parametric_axes import axes_type, with_axes
MyStep = axes_type("MyStep", (Step,), {},
                   scope="/pipeline/step_0",
                   registry=STEP_REGISTRY)
MyStep.__axes__   # {"scope": "/pipeline/step_0", ...}
MyStep.__scope__  # convenience attribute
Features: inheritance works, __axes__ is a MappingProxyType, optional TypedDict schema for static checkers.
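For readers who want to see the mechanics, here is a minimal, self-contained sketch of such a helper. It is illustrative only, not the actual prototype; the internal _raw_axes store is an assumption of this sketch, and the merge follows the leftmost-wins, per-key MRO rule described above:

```python
from types import MappingProxyType
from typing import Any

def axes_type(name: str, bases: tuple, namespace: dict, **axes: Any) -> type:
    """Create a class carrying framework metadata in __axes__.

    Per-key inheritance: walk the MRO from most-derived to least and let
    the first class that defines a key win (leftmost wins), then apply
    the explicitly passed axes on top.
    """
    cls = type(name, bases, dict(namespace))
    merged: dict[str, Any] = dict(axes)  # explicit axes take priority
    for klass in cls.__mro__:
        # Only each class's own store, so resolution is truly per-key.
        for key, value in klass.__dict__.get("_raw_axes", {}).items():
            merged.setdefault(key, value)
    cls._raw_axes = merged                # raw store used for inheritance
    cls.__axes__ = MappingProxyType(merged)  # read-only public surface
    return cls

Base = axes_type("Base", (), {}, scope="/root")
Child = axes_type("Child", (Base,), {}, registry="handlers")
print(dict(Child.__axes__))  # inherits scope from Base, adds registry
```

isinstance/issubclass are untouched, matching the "not core identity" property: axes live beside the type, not inside its identity.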
Prototype: GitHub – trissim/ObjectState: Generic lazy dataclass configuration framework with dual-axis inheritance and contextvars-based resolution (MIT)
Typing Interaction
- Axes are runtime metadata, orthogonal to __annotations__
- Static tools MAY read __axes__ to validate known keys via an optional schema
- Unknown axes are not type errors unless the framework opts into validation
- Tiny protocol for checkers:
class HasAxes(Protocol): __axes__: Mapping[str, Any]
Current Positions (seeking feedback)
- Extend type(): an opt-in axes parameter is the core proposal; no new construct needed.
- Schemas: framework-defined by default; a tiny optional standard schema could exist, but is not required.
- Static checkers: runtime-only by default; opt-in schema when provided.
Open Question
- Any MRO edge cases beyond per-key resolution we should pin down?
I have a draft PEP if there’s interest. Happy to hear whether this aligns with typing’s goals.
Background: I’ve formalized why frameworks need extensible axes and why Python is uniquely suited for this. Happy to share the formal analysis if useful, but didn’t want to bury the proposal in theory. Paper found here
16 posts – 5 participants
The SC also decided to publish the full 2025 PSC meeting summaries for those who are not active on the Discord server.
Finally, we would like to thank @emily and @gpshead for their service as members of the 2025 Python Steering Council.
Warm regards from very cold Seoul,
Donghee
on behalf of the Python Steering Council
=========================================
2025-03-27 PSC Meeting Summary
- Started an in-depth discussion of PEP 749. We have some pending questions for the Typing Council but are not done yet.
- Started to discuss in depth what is needed to move to phase 2 of free-threaded Python: communication around it, problems and challenges, and how to identify as many problems as early as possible. (We are trying to prioritize unblocking work.)
- We have a bunch of pending notices that will get done soon (like the comments on PEP 750).
2025-04-03 PSC Meeting Summary
- The remaining items for PEP 749 (Implementing PEP 649). We have finalized our questions for the Typing Council and will send these once we get a final signoff from all SC members.
- The list of “nitpicks” for PEP 750 (Template Strings), which will also be sent pending final approval.
- A check-in on outstanding PEPs and issues that should land for 3.14, namely PEP 773 (A Python Installation Manager for Windows).
- Planning for the Language Summit and gathering feedback from the community on free-threading
2025-04-17 PSC Meeting Summary
- Synced with DiR Łukasz; his internet outage held off. (New podcast episode up, PSF blog post about sprints, triager matters, PyCon US summit items.)
- The next two weeks have office hours slots booked, yay communication!
- Initial discussion of PEP 784 (adding compression.zstd).
- PEP 785 – ExceptionGroup.leaf_exceptions etc. API addition – seems simple enough, but allowing time for community input. Poked the Discourse thread – @iritkatriel and @yselivanov as perceived domain experts – any opinions? Reply on Discourse.
- PEP 779 – free-threading PEP 703 phase change? – waiting until after talking to people at PyCon US; the RM confirmed it is okay if we pronounce on that before 3.14 beta 3. (We let the PEP authors know.)
- PEP 773 consensus confirmed, reply has been sent.
- PEP 749 implementing PEP 649 – we did not get to this today, refreshing on the PR updating the PEP & related email is our homework.
2025-04-24 PSC Meeting Summary
- We were joined in Office Hours by Brandt Bucher and Savannah Ostrowski to discuss PEP 744 (JIT Compilation) and the general plans for the JIT in Python 3.14 and beyond. I’ll leave it to them to summarize that discussion and action items, but from the SC’s perspective, we found it incredibly useful and productive, so we want to thank them for joining us.
- After Office Hours, the SC discussed PEP 784 – Zstandard to the standard library. We’re gathering some additional information before we can make a pronouncement.
- We discussed PEP 773, which we still plan on officially accepting tomorrow (2025-04-25)
- We discussed Petr’s request to remove a couple of C API functions related to recursion limits, without a deprecation period. This was unanimously recommended by the C API WG, and the SC has agreed.
- We discussed a response from Jelle to our initial feedback on PEP 749. We ran out of time so we’ll complete our response asynchronously.
- We had some additional discussions regarding future OH topics.
- We briefly discussed the SC panel session slated for Pycon.
2025-05-01 PSC Meeting Summary
- Synced with DiR Łukasz on several topics, including syntax highlighting in the REPL PR, PyCon US sprint preparations, and debugging ongoing CLA bot issues.
- PEP 749: Collected final feedback from Steering Council members based on the Typing Council’s response and preparing an official reply.
- Discussed PyCon US panel session:
- Agreed to balance between pre-submitted and live questions.
- Emily has prepared this year’s presentation template (Thanks, Emily!).
2025-05-08 PSC Meeting Summary
- OH with Eric Snow: Reviewed PEP 734 in light of current PyPI usage data and discussed next steps.
- Held a detailed discussion on the free-threading initiative.
- Finalized planning for the PyCon SC panel session, confirming the schedule, presentation topics, and speaker format.
2025-05-15 to 2025-05-22
- Meeting skipped during PyCon US weeks.
2025-05-29 PSC Meeting Summary
- Synced with DiR Łukasz:
- Discussed transition plans for the 3.13.4 macOS release, including new Apple signing keys and improvements to the buildbots.
- Reviewed the sprint feedback from PyCon US.
- Discussed potential adjustments to the Language Summit format, such as topic-based scheduling and better support for attendee interaction.
- Continued discussion on PEP 779, focusing on defining stable C API requirements and identifying key extension categories for readiness.
- PEP 734 is progressing toward acceptance, with initial discussion around module naming and import paths.
2025-06-05 PSC Meeting Summary
- OH with Michael Droettboom: Demoed a benchmarking system for faster-cpython using bare-metal machines. Discussions are ongoing around security, infrastructure setup, and potential PSF funding.
- Reviewed the core developer promotion process and proposed improvements, including better feedback mechanisms and follow-up mentoring.
- Finalized the Council’s response to PEP 734.
- Continued discussions on the requirements and expectations for PEP 779, including documentation, performance targets, and API stability.
2025-06-12 PSC Meeting Summary
- DiR Update with DiR Łukasz:
- Resolved multiple regressions in Python 3.13.5.
- Discussed adding bytecode regression checks to CI.
- Talked about forming a Release Manager team and identifying future RM candidates.
- Finalized the response to PEP 779.
- Reviewed the current status of PEP 734 and discussed next steps.
- Approved Peter’s promotion: no objections.
- Continued discussion on core dev promotion timelines and expectations.
2025-06-19 PSC Meeting Summary
- Clarified C API terminology (Stable ABI / Limited C API) in the PEP 779 announcement. An update to the announcement is planned soon.
- Reviewed and discussed PEP 782 and PEP 750.
- Discussed the selection process for new RM.
- Discussed a potential memory benchmarking page (e.g., memory.python.org), similar in concept to speed.python.org.
2025-06-26 PSC Meeting Summary
- Bi-weekly DiR Sync (with Petr as a quick guest!)
- The Stable ABI for Free Threading has been opened as an issue to C API WG. Will meet during office hours next week to discuss more thoroughly
- Łukasz has been helping with memory.python.org
- Łukasz and Deb informed the SC of a conduct report that is being handled by the PSF
- Finalized clarifications to the SC’s PEP 779 response, which can be seen here
2025-07-03 PSC Meeting Summary
- Office Hours with Petr Viktorin: reviewed the overarching ideas from the C API WG regarding the Stable ABI for Free Threading and Petr’s proposal, PEP 793
- Discussed the early success of memory.python.org and next steps for moving performance benchmarks to the new site, which would eventually become benchmarks.python.org
- Discussed ways to help the community remove blockers or settle areas struggling to reach consensus that aren’t PEP-sized
- Discussed plans for those attending EuroPython in a couple of weeks. Some SC members will attend and can have follow-up conversations in person.
2025-07-10 PSC Meeting Summary
- Reviewed recent core dev promotion feedback form responses and shared relevant feedback with mentors
- Discussed type checking and type annotations in the CPython repo; SC will draft a “state of the world” for current use of annotations in the stdlib
- Discussed PEP 782 (Add PyBytesWriter C API); SC will continue to discuss and research
- Discussed PEP 728 (TypedDict with Typed Extra Items); Typing Council approves, SC to review
- Discussed the state of the profiler modules in the stdlib. With the new sampling profiler coming, should we do a mini-reorg to create a stdlib package? Should profile.py be deprecated? It’s very old and extremely slow, but it does handle multiple threads.
- Discussed RM selection for 3.16 & 3.17. SC to contact the previous RMs for feedback.
2025-07-17 PSC Meeting Summary
- Only 3/5 attendance due to EuroPython; no decisions to be made until everyone returns
- Continued discussion on PEPs 728, 782, 545, 793
- Continued discussion regarding RM for 3.16 & 3.17
- Discussed problem with concurrent.futures.InterpreterPoolExecutor in 3.14 and whether it should be a blocker or just marked as experimental and fixed in 3.15; likely an RM decision
2025-07-24 PSC Meeting Summary
- DiR Update with Łukasz
- Removed the curses dependency from pyrepl, improving compatibility for Emscripten/Pyodide and FreeBSD.
- Unblocked iOS wheels after CFFI fixes (callbacks still unsupported on iOS).
- Continued progress toward enabling mobile Python apps.
- Discussed a potential JavaScript FFI PEP to unify approaches between Pyodide and MicroPython.
- Deb Sync
- Reminder to book travel early for the September Python Core Sprint (especially for PSF-funded participants).
- Suggested a finance review meeting with Phyllis.
- PEP Discussions
- Discussed feedback for PEP 728.
- Release Manager
- SC agreed with RMs that Savannah is a good choice as Release Manager for 3.16 & 3.17.
- Hugo to announce the decision.
2025-07-31 PSC Meeting Summary
- PEP Discussions
- Emma’s Promotion
- Approved with no objections; access updates completed.
- Sprints
- Emily to coordinate mentorship with Diego and Tania.
- DiR Sponsorship
- Discussed possible renewal of Bloomberg sponsorship for the Developer-in-Residence program.
2025-08-07 PSC Meeting Summary
- DiR Update with Łukasz
- cffi updated for free-threading and iOS support; version 2.0 coming soon.
- Emscripten buildbots are green (with many test skips).
- Released Python 3.13.6.
- Working on editing a new podcast episode.
- PEP Discussions
- Other
- Discussed how to handle community confusion about experimental projects, particularly after PEP 779 approval.
2025-08-14 PSC Meeting Summary
4 of 5 SC members met and discussed:
- PEP 728 – TypedDict with Typed Extra Items
- A reasonable response was received from the PEP author and its sponsor, resulting in a small PR to address previous discussions and improve the “How to Teach This” section.
- With this update, the SC approves PEP 728, after confirming with the absent SC member.
- As a general note, the SC would like to encourage improvements to the online documentation, especially in areas of growing complexity such as typing
- PEP 793 – PyModExport: A new entry point for C extension modules
- The SC would like an explicit decision from the C API WG before proceeding
- Annual Report
- Emily met with Deb and Phyllis from the PSF to review the current budget. The Core Dev budget is healthy and has extended support for the 3 current developers-in-residence.
- There is an outstanding item on the balance sheet that must be reconciled by the PSF before a full report can be published.
- We discussed ensuring that we have tracking for all budget expenditures and how best to break down our “buckets” for reporting purposes.
- Mentorship Resources
- Emily is working with Tania and Diego to support Tania’s mentorship presentation at the upcoming Core Sprint. Information on our current areas that we want to approve along with past materials on workshops and surveys will be provided.
2025-08-21 PSC Meeting Summary
The SC met and discussed:
- DiR Update with Łukasz
- A macOS installer issue
- Work completed to fix a couple of bugs in mypy
- Plans for the Core Sprint
- PSF Update with Deb
- Guido has sent in a request for funding a translation platform – getting this on our radar for review
- We want to make sure that the Docs Editorial Board supports this, will check with them before approving funds
- Future support for the PSF for managing the CPython budget, possibly as a percentage of sponsorship received
- Discussed options and ideas for future Core Sprint funding to bring in more support, possibly as packages like how PyCon US structures sponsorships
- Checked in on current standing of machine support for KVM for Macs
- PEP 799 – A dedicated profiling package for organizing Python profiling tools
- PEP 793 – PyModExport: A new entry point for C extension modules
- The SC would like additional information from the C API WG for clarity; see the full message here
- PEP 782 – Add PyBytesWriter C API
- The SC gave preliminary approval and is drafting an acceptance
2025-08-28 PSC Meeting Summary
4 of 5 SC members met and discussed:
- Funding Transifex paid service for the docs translation team. The PSC is in favor of a one-year approval without commitment to ongoing funding. The PSC encourages translators to come up with an exit strategy if future costs get untenable. We recommend the documentation team check in with the PSF every year to see if the funding still makes sense.
- This sparked a larger discussion about the use of CPython development funds and the authority of the PSC to make decisions about how it should be spent.
- The PSC will present some slides at the September core sprint.
2025-09-04 PSC Meeting Summary
The SC met and discussed:
- PSF Update with Deb
- Deb suggested regular syncs between the PSF Board and PSC, 3-4 times per year. Some board members don’t know what the PSC does, and there are areas where responsibilities intersect. The idea is to start with general topics and then drill down as necessary.
- No DiR meeting this week
- PEPs
- PEP 782 – Add PyBytesWriter C API
- Draft a PEP acceptance response, and respond to a ping email from Victor.
- PEP 772 – Packaging Council governance process
- Board has signed off and the pypa-committers vote is open.
- PEP 765 – Disallow return/break/continue that exit a finally block
- If it’s to be reverted for 3.14, it must happen before 3.14.0, and the PSC would require at least one RC with the revert. RC 3 is currently scheduled for 9/12. Pablo will send a message: “it’s now or never”. The PSC strongly encourages proponents of reverting the PEP to file a blocking issue with CPython and an issue on the PSC tracker. Both the PSC and the 3.14 RM must approve.
- Other
- Put together UK sprint slides from the PSC.
- The PSC approves one year of Transifex paid service. An approval email was sent and it’s up to the PSF to take it from here. Any decisions about renewals are deferred to the future.
- Discussed the creation of an “observability” working group, similar to the C API WG, which would coordinate future development of debugging and performance APIs.
- Check-in with Ee to make sure we’re on track for Bloc STAR voting for the next PSC election.
2025-09-11 PSC Meeting Summary
- 3 of 5 SC members had a brief meeting and discussed the SC presentation at the Core Sprint and pending PEP responses.
2025-09-18 PSC Meeting Summary
- The SC did not have an official meeting, as 3 of 5 members were in-person at the Core Sprint. We met ad hoc to address potential release blockers/reversions. It was decided not to revert any changes; see the full statement here.
2025-09-25 PSC Meeting Summary
- The SC had office hours with Tal Einat to discuss updates to the deferred PEP 661 (Sentinel values)
- Current state: Registry idea dropped; using standard pickling/unpickling. Reference implementation to be updated.
- Main open issue: Truthiness/boolean behavior — some want configurability, but most favor simplicity. This has been a surprisingly contentious topic!
- Discussion Consensus:
- Keep sentinels always truthy (matches standard library usage).
- Avoid making bool configurable or allowing subclassing.
- Simpler design preferred.
- Reference in Python for now; final version to be in C if accepted.
- The SC met and discussed:
- Publishing SC Minutes and Summaries
- Discussed making summaries more public, as the current summaries on Discord updates are well-received but not widely shared.
- Discussed whether we could easily expose a public agenda in Notion.
- PEP 793 – (PyModExport: A new entry point for C extension modules)
- The C API WG has posted a response: 4 for, 1 against, 1 abstain.
- Discussion Consensus:
- We don’t want to hold up work. Our goal: ensure this is going in the right direction.
- We request gathering broader input (bindings, pydantic, other core devs) – contacted Petr to gather more feedback.
- Voting provider for the upcoming SC election in November/December
- We have been in touch with Ee. The provider looks good; one non-critical missing feature expected to be ready before the election.
- Core Sprint Hosting
- Discussed possibly formalizing host selection via an RFP.
- Pondering if we want a more deliberate rotation between cities.
- Publishing SC Minutes and Summaries
2025-10-02 PSC Meeting Summary
- The SC met and discussed:
- Follow up around the already implemented and released PEP 765 (Disallow return/break/continue that exit a finally block) behavior change – Drafting an official PSC reply in the thread in support of Irit and the PEP.
- Started our discussions about PEP 791 (math.integer — submodule for integer-specific mathematics functions).
- Took our initial look at PEP 679 (New assert statement syntax with parentheses) – todo – read up and discuss more later.
- Discussed ensuring the SC election process kicks off and if that alters which decisions we prioritize now or leave to the next SC.
2025-10-09 PSC Meeting Summary
- The SC met and discussed:
- Learnings and possible improvements to the PEP process based on the PEP 765 ”return in finally” decision and post-announcement discussion.
- PEP 791 “intmath” and decided that `math.integer` would be the name. With that, the PEP was accepted, with notification to be drafted.
- Regarding alternating sprints between PyCon US and EuroPython, we decided to draft a non-binding vote to gauge the temperature on DPO.
- Still awaiting responses to PSC feedback on PEP 793 “PyModExport”.
- Decided on dates for upcoming 2026 term PSC elections, with email to Ee requesting PEP 8107 drafting and election administration.
- Marked the request to elevate Raspberry Pi ARM64 to Tier 3 as approved.
2025-10-16 PSC Meeting Summary
- The SC had a sync meeting with DiR.
- Shared updates on the editing progress regarding core.py during the Cambridge Core Sprint.
- Shared the preparation for removing Python 3.9 from buildbots.
- Talked about Python 3.14 release.
- The SC also discussed our response to PEP 765 (Disallow return/break/continue that exit a finally block).
2025-10-23 PSC Meeting Summary
- The SC met and discussed:
- PEP 679 (New assert statement syntax with parentheses), deciding to reject since there is now a warning about this common mistake, which is good enough
- PEP 798 (Unpacking in Comprehensions) extensively, with many questions, concerns, and feedback brought up. Deferred decision until next week.
- We accepted PEP 793 (PyModExport: A new entry point for C extension modules)
- We agreed on the format of questions for the Language Summit alternation poll.
2025-10-30 PSC Meeting Summary
- The SC met with the DiR, Łukasz, and discussed:
- The last ever 3.9! 3.9.25 will be released shortly, covering a few minor issues (no major security issues).
- Re-writing speed.python.org
- Brief planning and logistic questions around future conferences and locations related to core sprint plans
- The SC also discussed:
- PEP 798 (Unpacking in Comprehensions)
- We received feedback from the community and did some internal polls with colleagues
- Overall, we are positive on the PEP but want to ensure that the waters are not muddied with `yield from`
- Decided to accept the PEP without `yield from`, and will release an acceptance statement
- PEP 810 (Explicit lazy imports)
- Discussed and landed on `lazy` for the keyword after entertaining multiple options
- Decided to accept the PEP, and will release an acceptance statement
- Brief discussion on possible improvements to the PEP process, more to come!
2025-11-06 PSC Meeting Summary
- The SC held office hours with Emma Smith.
- The main discussion focused on ways to ensure that DPO discussions stay on track, respectful, and productive, and how the SC can step in early to settle sub-topics and questions, avoiding long, difficult to follow threads.
- SC also discussed several possible action items, such as encouraging PEP authors to start new threads at key milestone changes and providing early feedback from the SC, among others.
- The SC met and discussed:
- SC decided to remove commit privileges from inactive core developers.
- The decision was made with security concerns in mind, and the SC noted that commit access can be restored at any time upon request.
- SC also finalized the summary and publication of the meeting notes.
2025-11-20 PSC Meeting Summary
- The SC held office hours with Petr Viktorin to discuss various details around CPython development.
- Continued to have discussions about benchmarking hardware, the budget, and where to house the machines
2025-12-04 PSC Meeting Summary
- Chatted about election in progress, upcoming hand-off stuff.
- Discussed if docs translations make sense in the GitHub python org. sc#322
- Discussed the creation of a `Platforms/` dir in the cpython repo and what should move in and when. sc#317
- Discussed two WASI questions from Brett Cannon:
- Discussed Petr’s “friendlier unsupported platforms” PEP11 sc#324
- Discussed the PSRT definition PEP 811 from Seth – accepted!
- Discussed the Language Summit 2026 location.
- We want to support moving it to EuroPython for 2026, but caveats still in flight: locale and venue are not yet defined.
- Requirements we haven’t yet codified (and thus communicated directly to EuPy folks) have themes of:
- We’re holding up PyCon US planning with the unknown, global international travel friendly locale needs
- Lead time for travel Visas
- PSF grant funding expense requirements
- So TBD – goal: know for sure by late Dec/early Jan.
- It’d be a shame to need to make 2027 be the first alternating year.
- Emily, Hugo, & Lukasz in contact with Ege & EU folks.
- Discussed some Azure credits sent Python’s way from Ezio
- Passed those on to Ee and Lukasz to consider.
2025-12-11 PSC Meeting Summary
- Synced up with Developer in Residence Łukasz & PSF executive director Deb.
- Discussed macOS releases on GHA, buildbot flakiness, pyrepl “security” report.
- PSF staff, DiRs, core dev funding discussions
- Checked in on 2026 Language Summit decision
- Venue not quite settled
- We have timelines for travel grants & visas, etc.
- Conversations in progress. Should have EuroPy details very soon.
- Office hours with Filipe (yay office hours in use, welcome!)
- Discussed PEP 739 build_details.json and the overlapping installations support question.
- Election ends in a couple days, next week: a handoff meeting
2025-12-18 PSC Meeting Summary
- The 2025 and 2026 SCs met and discussed handoff procedures, outstanding issues, and general knowledge transfer. The 2026 SC will reconvene in January.
2 posts – 1 participant
This presents an interesting compatibility challenge for Python packaging standards, as I don’t think any of the PEPs that introduce TOML-based files (pyproject.toml and pylock.toml) specify which version of TOML should be used.
I assume that at some point TOML 1.1 will be added to tomli and then those changes will be upstreamed to the standard library, but probably not back-ported.
My question here is what would be a good strategy for adoption? My preference is to adopt reading it as quickly as possible, but not to specially call it out, e.g. not having something in the changelog like “now supporting TOML 1.1”, and if possible emit a user warning when 1.1-specific features are used, at least for a few years.
But I’m sure others have opinions, or perhaps even experience in adopting new format versions.
23 posts – 8 participants
`_suggestion`, to give suggestions for exceptions in `traceback.py` when an error occurs. I think this module could be made public: it is written in C, so it is faster than difflib, and programmers could use it to give suggestions to users faster and more accurately.
The required arguments I think that needed are:
- `wrong_name`: the name that is wrong. Must be `str`.
- `possible_list`: the names which are possible. Must be `list[str]`.
- `max_list_length`: the max length of `possible_list`. If the length of `possible_list` is beyond `max_list_length` it returns None. Must be `int`, default 750.
- `max_string_length`: the max length of `wrong_name`. If the length of `wrong_name` is beyond `max_string_length` it returns None. Must be `int`, default 40.
If there is a possible name found in possible_list, the function will return the possible name. Otherwise it returns None.
The module `suggestion` is a Python script. First it tries to import the module `_suggestion`; if that fails, it uses a Python implementation (just like `traceback.py`).
2 posts – 2 participants
Hello, in this code I just made a list node and then tried to print all its elements, but instead the code prints the location of those elements. What is the fault? How do I code it so that the actual elements get printed instead of their locations?
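Without the original code it is hard to be certain, but the symptom (output like `<__main__.Node object at 0x...>`) usually means a user-defined class is being printed without a `__repr__`. A minimal sketch of the likely fix, assuming a hand-rolled linked-list `Node` class:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

    # Without __repr__, print(node) shows the default object representation
    # (the "location"); defining it makes the stored value visible.
    def __repr__(self):
        return f"Node({self.value!r})"

def to_list(head):
    """Walk the chain and collect the stored values for printing."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = Node(1, Node(2, Node(3)))
print(to_list(head))  # [1, 2, 3]
```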
5 posts – 3 participants
dataclass fields, or __annotations__ in general. In my particular case, I’m using dataclasses to represent rows in a DynamoDB table, but DynamoDB doesn’t support every type that Python does, only these:
DynamoType = (
bytes
| bytearray
| str
| int
| Decimal
| bool
| set[int]
| set[Decimal]
| set[str]
| set[bytes]
| set[bytearray]
| Sequence["DynamoType"]
| Mapping[str, "DynamoType"]
| None
)
I’d like for the type checker to catch invalid usages in a dataclass definition. For example, assuming there were some DynamoModel that implemented this functionality, I could do the following:
@dataclass
class SomeModel(DynamoModel):
good_field: bool
bad_field: float # should error
But this seems impossible. My next idea was to try to make some parametrized type that would work like Sequence:
T = TypeVar("T", bound=DynamoType)
DynamoSequence = Sequence[T]
@dataclass
class SomeModel:
good_field: DynamoSequence[bool]
bad_field: DynamoSequence[float] # type checker error
But I still want type checkers to treat it as the underlying type when I construct an instance or reference a member field, so I came up with the following hack, abusing Annotated:
_DynamoField = TypeVar("_DynamoField", bound=DynamoType)
DynamoField = Annotated[_DynamoField, _DynamoField]
Now, as long as I wrap every field in DynamoField[...], mypy and pyright both recursively verify the types and catch errors, as desired.
Evidently, this problem is easy to solve at runtime by inspecting __annotations__. Is there a principled way to do this statically, or have I just run into a limitation of the type system?
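As a point of comparison for the runtime half, a sketch of the `__annotations__` inspection (the allowed set here is deliberately simplified to plain non-generic types; handling unions and the `set[...]`/`Sequence[...]` members would need `typing.get_origin`/`get_args`):

```python
from dataclasses import dataclass
from typing import get_type_hints

# Simplified stand-in for DynamoType: only plain, non-generic types.
ALLOWED = (bytes, bytearray, str, int, bool, type(None))

def check_dynamo_fields(cls):
    """Class decorator that rejects annotations outside the allowed set."""
    for name, tp in get_type_hints(cls).items():
        if tp not in ALLOWED:
            raise TypeError(f"{cls.__name__}.{name}: {tp!r} is not a DynamoDB type")
    return cls

@check_dynamo_fields
@dataclass
class Good:
    flag: bool
    name: str
```

This catches the error at class-definition time rather than at call sites, which is as early as runtime checking can get, but still much later than a static checker would.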
3 posts – 3 participants
#steering-council-comms channel on the Python Core Team Discord server, and sharing the summaries in a more timely manner.
Without further ado:
- This was the first meeting of the new year and with just the 2026 SC members.
- We discussed general meeting logistics, confirmed that the new time works for everyone, and had a general tour of Notion. We also had some additional housekeeping for access to GitHub repos and other resources following the transition to the 2026 SC.
- We agreed that we will prioritize PEPs for the following week’s reading and discussion, and publish our planned agenda for visibility.
- We agreed to publish on the DPO Committers category for better transparency. This category can be read by anyone and commented on by the Core Team. We are looking into automatically mirroring that to the private `#steering-council-comms` channel on the Core Team Discord (for now, we’ll mirror manually).
- We plan on publishing a 2025 PSC wrap-up summary to the Inquisition category on DPO.
- We will share the updated SC Office Hours schedule and sign-up calendar. (Done here).
- We closed a carry-over topic from 2025 regarding free-threading docs, since that has already been resolved by the FT team.
- We had a lengthy discussion about various project process issues, and plan on continuing to improve various aspects about the Python development process, including ensuring that project policies are well-understood by the Core Team.
- Our PEP agenda for next week, in rough priority order, and time permitting:
1 post – 1 participant
charset_normalizer.md, native wheels provide it as both
- a native extension, e.g. charset_normalizer/md.cpython-314-darwin.so
- a pure python file, charset_normalizer/md.py
Is this an intended/documented way to ship an extension module with a pure python fallback, or a quirk of one project’s packaging that I’ve read too much into?
Python’s importlib appears to transparently use the native extension if it’s compatible (by platform, major.minor, …) and fall back to the pure Python if necessary. There’s no explicit try: import …; except ImportError: import …_fallback as … in charset_normalizer that I could spot.
For example if I manually create a situation where the .so is compiled for a non-matching Python version (3.14 vs 3.13) then the import succeeds without complaint/warning, using the pure Python
alex@d13:~$ unzip charset_normalizer-3.4.4-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl
...
inflating: charset_normalizer/md.cpython-314-x86_64-linux-gnu.so
inflating: charset_normalizer/md.py
inflating: charset_normalizer/md__mypyc.cpython-314-x86_64-linux-gnu.so
...
alex@d13:~$ python3
Python 3.13.5 (main, Jun 25 2025, 18:55:22) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import charset_normalizer.md
>>> charset_normalizer.md
<module 'charset_normalizer.md' from '/home/alex/charset_normalizer/md.py'>
Context: I’m investigating fixes for RequestsDependencyWarning: Unable to find acceptable character detection dependency (chardet or charset_normalizer). · Issue #1405 · mitogen-hq/mitogen · GitHub and might end up using this behaviour in a fix. Mitogen serves pure python modules over the wire to child processes as they import them.
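As far as I can tell, the mechanism is the finder's suffix ordering rather than any fallback logic in the package itself: for module `md`, the path finder tries `md` plus each extension suffix before the source suffixes, and the version-tagged `.so` name only matches the interpreter it was built for. A quick way to inspect the suffixes in play (values vary by platform and version):

```python
import importlib.machinery as machinery

# Extension suffixes are tried before source suffixes; the tag-specific
# entry (e.g. '.cpython-313-x86_64-linux-gnu.so') won't match a wheel
# built for a different feature release, so md.py gets picked up instead.
print(machinery.EXTENSION_SUFFIXES)
print(machinery.SOURCE_SUFFIXES)   # includes '.py'
```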
4 posts – 3 participants
This could allow not only avoiding allocating more memory than necessary, but also changing LOAD_GLOBAL calls to LOAD_CONST.
Example:
import dis

PI = 3.14
def calculate(radius):
return radius * PI
dis.dis(calculate)
This displays:
5 LOAD_FAST 0 (radius)
LOAD_GLOBAL 0 (PI)
BINARY_OP 5 (*)
RETURN_VALUE
But if PI were made constant, PI could be replaced with its constant value, and then it would display:
5 LOAD_FAST 0 (radius)
LOAD_CONST 1 (3.14)
BINARY_OP 5 (*)
RETURN_VALUE
I’d like to hear your opinions.
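For comparison, one workaround that already exists is binding the value at function-definition time through a default parameter, which turns the lookup into a LOAD_FAST (the `_pi` parameter name is just illustrative):

```python
import dis

PI = 3.14

def calculate(radius, _pi=PI):  # _pi is bound once, at definition time
    return radius * _pi

# The body now loads _pi as a local instead of a global.
opnames = [ins.opname for ins in dis.get_instructions(calculate)]
print(opnames)
```

This gives the lookup speedup but not the constant folding the proposal is after, and it leaks `_pi` into the function's signature.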
9 posts – 7 participants
Windows 10, Python 3.14.2, certificate version: 2026.1.4
File "urllib\request.py", line 1319, in do_open
File "http\client.py", line 1338, in request
File "http\client.py", line 1384, in _send_request
File "http\client.py", line 1333, in endheaders
File "http\client.py", line 1093, in _send_output
File "http\client.py", line 1037, in send
File "http\client.py", line 1479, in connect
File "ssl.py", line 455, in wrap_socket
File "ssl.py", line 1076, in _create
File "ssl.py", line 1372, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1032)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "riv_launcher.py", line 583, in <module>
File "flet\utils\deprecated.py", line 40, in wrapper
File "flet\app.py", line 43, in app
File "flet\app.py", line 96, in run
File "asyncio\runners.py", line 195, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 725, in run_until_complete
File "flet\app.py", line 228, in run_async
File "flet_desktop\__init__.py", line 39, in open_flet_view_async
File "flet_desktop\__init__.py", line 97, in __locate_and_unpack_flet_view
File "flet_desktop\__init__.py", line 218, in __download_flet_client
File "urllib\request.py", line 214, in urlretrieve
File "urllib\request.py", line 189, in urlopen
File "urllib\request.py", line 489, in open
File "urllib\request.py", line 506, in _open
File "urllib\request.py", line 466, in _call_chain
File "urllib\request.py", line 1367, in https_open
File "urllib\request.py", line 1322, in do_open
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1032)>
2 posts – 2 participants
--dir no longer works with 3.14.1. Not knowing what else to do, I filed [WASI-SDK] CPython 3.14.1 can no longer find systemroot in wasi-libc build (3.13.11 works) · Issue #143537 · python/cpython · GitHub with CPython.
$ cd ~/python-3.14.2-wasi_sdk-24/
$ wasmtime --dir . python.wasm
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Fatal Python error: Failed to import encodings module
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x012c0ae8 (most recent call first):
<no Python frame>
I tested it with 3.13.11 and it indeed works, as described earlier in the thread.
2 posts – 2 participants
In the current spec: Special types in annotations — typing documentation
Any other special constructs like `tuple` or `Callable` are not allowed as an argument to `type`.
This is in contradiction with the conformance test suite, which requires tuple to be allowed as an argument to type.
I propose to change the sentence to the following:
Any other special forms like `Callable` are not allowed as an argument to `type`.
The proposed change moves away from the ambiguous term “special constructs” to the terminology “special form” (with a link to the glossary).
6 posts – 3 participants
I committed a rant on dependency conflicts that gives the correct solution.
It is not very respectful, but Python is like any other language on this subject.
In case some people here like to laugh, they may find it interesting.
Best regards,
Laurent Lyaudet
13 posts – 4 participants
I’m opening this to raise awareness of a PPO change I proposed a few weeks ago, here:
TL;DR: .pypirc is the file format that twine (and others) read for index configuration and credentials. It was never standardized by a PEP, so it’s one of the standards on PPO that sort of exists in a grandfathered state.
I’ve proposed a small change to the format, to stipulate that .pypirc SHOULD always be UTF-8 encoded, although tools MAY handle other encodings. This merely codifies the status quo, but it was implicit rather than explicit before.
I can’t imagine this would be a very controversial change, but I wanted to raise awareness here rather than merging it directly, since it exists in a weird gray area around pre-PyPA packaging standards. I welcome any thoughts/feedback!
(For background context, see the linked Twine PR in that PPO PR. The TL;DR is that some users end up with differently encoded .pypirc files through no fault of their own, and there’s no clear decision procedure for how Twine or other tools should handle that case.)
1 post – 1 participant
The code runs fine, but when I run pylint I get these error messages.
Can anyone explain why?
$ pylint read_jpeg.py
************* Module read_jpeg
read_jpeg.py:12:10: E1101: Module ‘cv2’ has no ‘imread’ member (no-member)
read_jpeg.py:25:15: E1101: Module ‘cv2’ has no ‘cvtColor’ member (no-member)
read_jpeg.py:25:33: E1101: Module ‘cv2’ has no ‘COLOR_BGR2GRAY’ member (no-member)
read_jpeg.py:42:0: E1101: Module ‘cv2’ has no ‘imwrite’ member (no-member)
read_jpeg.py:44:0: E1101: Module ‘cv2’ has no ‘destroyAllWindows’ member (no-member)
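A common workaround (hedged, since I don't have the file to test against): pylint does not introspect C extension modules by default, so it cannot see members that only exist in compiled code. Allow-listing `cv2` lets it load the extension for inspection:

```shell
# Allow pylint to import the cv2 C extension for member introspection
pylint --extension-pkg-allow-list=cv2 read_jpeg.py

# or persist it in pyproject.toml:
#   [tool.pylint.main]
#   extension-pkg-allow-list = ["cv2"]
```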
3 posts – 2 participants
re.Match would copy its data from the source, so that if the source changed, the match would not:
import re
data = bytearray(b'1234567890')
pattern = re.compile(b'3')
match = pattern.search(data)
del data[0]
print(match.group(0))
$ python example.py # Should print b'3'
b'4'
I’m not saying I’m right or wrong for using re like this. Just that it’s surprising and mutable sources do seem to be supported (and mypy is happy)
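Until/unless the semantics change, a defensive pattern is to snapshot the mutable buffer before searching, so later mutations cannot shift the match (a sketch, assuming you control the call site):

```python
import re

data = bytearray(b'1234567890')
snapshot = bytes(data)              # immutable copy decouples the match
match = re.compile(b'3').search(snapshot)
del data[0]                         # mutating the original is now harmless
print(match.group(0))  # b'3'
```

The copy costs one allocation per search, which is exactly the trade-off the thread is weighing against zero-copy matching over the live buffer.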
7 posts – 5 participants
Currently in the I/O stack, when TextIOWrapper needs to convert from str to bytes, it calls an encoder which encodes the data and returns a bytes object[1]. With UTF-8 mode it is common that the output stream (e.g. sys.stdout) is using utf-8 encoding and the str (unicodeobject) already contains utf-8 encoded data, so the only work during encoding is allocating a bytes and copying across the data[2][3].
Proposal
I’d like to add a new method of encoding a str that can return any buffer protocol supporting object, such as a memoryview of the underlying bytes data, avoiding the bytes allocation and copy. While it is possible to special case this in TextIOWrapper I think this is common enough of a need to be worth having a more optimized option generally available.
Prior work
- Previous C API proposal: Better API for encoding unicode objects with UTF-8
a. Why this avoids the issue that was encountered: This proposal is focused on one specific use case where the copy is a measurable percentage of runtime. Writing Unicode, especially emoji, to `sys.stdout` is an increasingly common operation.
- Without Python UTF-8 mode (PEP 540) becoming the default (PEP 686) this would be much more difficult. Thanks for all the work to enable it!
Explored alternatives to adding a new method
- Change the `.encode()` signature to return `bytes`-like. `memoryview` is sufficiently distinct from `bytes` that I think this would create a lot of slightly broken code. I have previously attempted a similar change between `bytes` and `bytearray` and found a lot of compatibility issues. Type checkers could aid with this migration, but being compatible would mean adding a `bytes()` call or a conditional `bytes` call, which to me just moves the complexity to every function which calls `str.encode`; it does not remove it.
- Adding a new flag keyword argument. To me this would result in a cleaner API, but I think it would break too many existing `.encode` functions. The `codecs` and `.encode` APIs have been around a long time and have many custom implementations which are unlikely to handle arbitrary kwargs gracefully.
- Include a default implementation of the new method which falls back to the copy version. Low cost, but makes it harder to see if a specific encoder has been updated for the new API. `getattr` with a default provides a simple way to do this.
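The `getattr` fallback pattern from the last bullet might look roughly like this (`encode_to_buffer` is a hypothetical name for the proposed method; nothing with that name exists today):

```python
import codecs

def encode_for_write(text: str, encoding: str = "utf-8"):
    """Return a buffer-protocol object holding the encoded text."""
    codec = codecs.lookup(encoding)
    # Hypothetical new API: would return a memoryview without an extra copy.
    fast = getattr(codec, "encode_to_buffer", None)
    if fast is not None:
        return fast(text)
    data, _consumed = codec.encode(text)  # existing copying path
    return memoryview(data)
```

Callers that only need the buffer protocol (like TextIOWrapper's write path) never see the difference; only the allocation behavior changes.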
ASCII-only data has a no-copy fast path. cpython/Modules/_io/textio.c at 96ab379dcaa93630a230402b8183a26ac99097bd · python/cpython · GitHub ↩︎
cpython/Objects/unicodeobject.c at 96ab379dcaa93630a230402b8183a26ac99097bd · python/cpython · GitHub ↩︎
TextIOWrapper may translate the newlines which takes additional separate work. ↩︎
4 posts – 3 participants
This is the Native Messaging protocol
Native messaging protocol (Chrome Developers)
Chrome starts each native messaging host in a separate process and communicates with it using standard input (
stdin) and standard output (stdout). The same format is used to send messages in both directions; each message is serialized using JSON, UTF-8 encoded and is preceded with 32-bit message length in native byte order. The maximum size of a single message from the native messaging host is 1 MB, mainly to protect Chrome from misbehaving native applications. The maximum size of the message sent to the native messaging host is 64 MiB.
So far I have completed 64 MiB processing support for Node.js (the same code can be used for Deno and Bun), QuickJS, Bytecode Alliance’s Javy, AssemblyScript, Rust.
I still have C, C++, Static Hermes, Bash, V8, SpiderMonkey, Amazon Web Services LLRT, txiki.js, and Python to do to complete 64 MiB support for the languages, engines, runtimes I’ve written that currently have 1 MiB support.
I started rewriting the algorithm parsing the JSON manually (encoded in u8) in AssemblyScript because there’s no JSON global object. I then ported that algorithm to QuickJS. Somewhere along the way somebody on Discord posted the algorithm I wrote to Google’s Gemini program. The performance improvements to the algorithm are clearly measurable using the code Gemini program spit out.
Instead of asking Google Gemini to spit out the algorithm I wrote “optimized“ in Python, I’ll ask the humans who write Python, first.
Here’s what I have right now, that only handles 1 MiB of input
#!/usr/bin/env -S python3 -u
# https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging
# https://github.com/mdn/webextensions-examples/pull/157
# Note that running python with the `-u` flag is required on Windows,
# in order to ensure that stdin and stdout are opened in binary, rather
# than text, mode.
import sys
import json
import struct
import traceback
try:
# Python 3.x version
# Read a message from stdin and decode it.
def getMessage():
rawLength = sys.stdin.buffer.read(4)
# if len(rawLength) == 0:
# sys.exit(0)
messageLength = struct.unpack('@I', rawLength)[0]
message = sys.stdin.buffer.read(messageLength).decode('utf-8')
return json.loads(message)
# Encode a message for transmission,
# given its content.
def encodeMessage(messageContent):
# https://stackoverflow.com/a/56563264
# https://docs.python.org/3/library/json.html#basic-usage
# To get the most compact JSON representation, you should specify
# (',', ':') to eliminate whitespace.
encodedContent = json.dumps(messageContent, separators=(',', ':')).encode('utf-8')
encodedLength = struct.pack('@I', len(encodedContent))
return {'length': encodedLength, 'content': encodedContent}
# Send an encoded message to stdout
def sendMessage(encodedMessage):
sys.stdout.buffer.write(encodedMessage['length'])
sys.stdout.buffer.write(encodedMessage['content'])
sys.stdout.buffer.flush()
while True:
receivedMessage = getMessage()
sendMessage(encodeMessage(receivedMessage))
except Exception as e:
sys.stdout.buffer.flush()
sys.stdin.buffer.flush()
# https://discuss.python.org/t/how-to-read-1mb-of-input-from-stdin/22534/14
with open('nm_python.log', 'w', encoding='utf-8') as f:
traceback.print_exc(file=f)
sys.exit(0)
This is what I do in QuickJS – including the “optimization“ that Google Gemini spit out
#!/usr/bin/env -S /home/user/bin/qjs -m --std
// QuickJS Native Messaging host
// guest271314, 5-6-2022
function getMessage() {
const header = new Uint32Array(1);
std.in.read(header.buffer, 0, 4);
const output = new Uint8Array(header[0]);
const len = std.in.read(output.buffer, 0, output.length);
return output;
}
//
function sendMessage(message) {
// Constants for readability
const COMMA = 44;
const OPEN_BRACKET = 91; // [
const CLOSE_BRACKET = 93; // ]
const CHUNK_SIZE = 1024 * 1024; // 1MB
// If small enough, send directly (Native endianness handling recommended)
if (message.length <= CHUNK_SIZE) {
const header = new Uint8Array(4);
header[0] = (message.length >> 0) & 0xff;
header[1] = (message.length >> 8) & 0xff;
header[2] = (message.length >> 16) & 0xff;
header[3] = (message.length >> 24) & 0xff;
// Two writes are often better than allocating a new joined buffer
// if the engine supports it. If not, combine them.
const output = new Uint8Array(4 + message.length);
output.set(header, 0);
output.set(message, 4);
std.out.write(output.buffer, 0, output.length);
std.out.flush();
return;
}
let index = 0;
// Iterate through the message until we reach the end
while (index < message.length) {
let splitIndex;
// 1. Determine where to cut the chunk
// Try to jump forward 1MB
let searchStart = index + CHUNK_SIZE - 8;
if (searchStart >= message.length) {
// We are near the end, take everything remaining
splitIndex = message.length;
} else {
// Find the next safe comma to split on
splitIndex = message.indexOf(COMMA, searchStart);
if (splitIndex === -1) {
splitIndex = message.length; // No more commas, take the rest
}
}
// 2. Extract the raw chunk (No copy yet, just a view)
const rawChunk = message.subarray(index, splitIndex);
// 3. Prepare the final payload buffer
// We calculate size first to allocate exactly once per chunk
const startByte = rawChunk[0];
const endByte = rawChunk[rawChunk.length - 1];
let prepend = null;
let append = null;
// Logic to ensure every chunk is a valid JSON array [...]
// Case A: Starts with '[' (First chunk), needs ']' at end if not present
if (startByte === OPEN_BRACKET && endByte !== CLOSE_BRACKET) {
append = CLOSE_BRACKET;
} // Case B: Starts with ',' (Middle chunks), needs '[' at start
else if (startByte === COMMA) {
prepend = OPEN_BRACKET;
// If it doesn't end with ']', it needs one
if (endByte !== CLOSE_BRACKET) {
append = CLOSE_BRACKET;
}
// Note: We skip the leading comma in the raw copy later by offsetting
}
// 4. Construct the output buffer
// Calculate final length: Header (4) + (Prepend?) + Body + (Append?)
// Note: If startByte was COMMA, we usually want to overwrite it with '[',
// but your original logic kept the comma data or shifted.
// Standard approach:
// If raw starts with comma, we replace comma with '[' or insert '['?
// Your logic: Replaced [0] if it was comma.
// Optimized construction based on your logic pattern:
let bodyLength = rawChunk.length;
let payloadOffset = 4; // Start after 4-byte header
// Adjust sizes based on brackets
const hasPrepend = prepend !== null;
const hasAppend = append !== null;
// Special handling for the "Comma Start" case to match your logic:
// Your logic: x[0] = 91; x[i] = data[i]. Effectively replaces comma with '['
let sourceOffset = 0;
if (startByte === COMMA) {
sourceOffset = 1; // Skip the comma from source
bodyLength -= 1; // Reduce source len
// We implicitly assume we prepend '[' in this slot
}
const totalLength = 4 + (hasPrepend ? 1 : 0) + bodyLength +
(hasAppend ? 1 : 0);
const output = new Uint8Array(totalLength);
// Write Length Header (Little Endian example)
const dataLen = totalLength - 4;
output[0] = (dataLen >> 0) & 0xff;
output[1] = (dataLen >> 8) & 0xff;
output[2] = (dataLen >> 16) & 0xff;
output[3] = (dataLen >> 24) & 0xff;
// Write Prepend (e.g. '[')
let cursor = 4;
if (hasPrepend) {
output[cursor] = prepend;
cursor++;
} else if (startByte === COMMA) {
// If we didn't flag prepend but stripped comma, likely need bracket
// Based on your specific logic "x[0] = 91", we treat that as a prepend
output[cursor] = OPEN_BRACKET;
cursor++;
}
// Write Body (Fast copy)
// We use .set() which is much faster than a loop
output.set(rawChunk.subarray(sourceOffset), cursor);
cursor += bodyLength;
// Write Append (e.g. ']')
if (hasAppend) {
output[cursor] = append;
}
// 5. Send immediately
std.out.write(output.buffer, 0, output.length);
std.out.flush();
// Force GC only occasionally if needed (every chunk is often too frequent)
std.gc();
// Move index for next iteration
index = splitIndex;
}
}
function main() {
while (true) {
const message = getMessage();
sendMessage(message);
}
}
try {
main();
} catch (e) {
// std.writeFile(“err.txt”, e.message);
std.exit(0);
}
which is based on this code I wrote in QuickJS
function sendMessage(message) {
if (message.length > 1024 ** 2) {
const json = message;
const data = new Array();
let fromIndex = 1024 ** 2 - 8;
let index = 0;
let i = 0;
do {
i = json.indexOf(44, fromIndex);
const arr = json.subarray(index, i);
data.push(arr);
index = i;
fromIndex += 1024 ** 2 - 8;
} while (fromIndex < json.length);
if (index < json.length) {
data.push(json.subarray(index));
}
for (let j = 0; j < data.length; j++) {
const start = data[j][0];
const end = data[j][data[j].length - 1];
if (start === 91 && end !== 44 && end !== 93) {
const x = new Uint8Array(data[j].length + 1);
for (let i2 = 0; i2 < data[j].length; i2++) {
x[i2] = data[j][i2];
}
x[x.length - 1] = 93;
data[j] = x;
}
if (start === 44 && end !== 93) {
const x = new Uint8Array(data[j].length + 1);
x[0] = 91;
for (let i2 = 1; i2 < data[j].length; i2++) {
x[i2] = data[j][i2];
}
x[x.length - 1] = 93;
data[j] = x;
}
if (start === 44 && end === 93) {
const x = new Uint8Array(data[j].length);
x[0] = 91;
for (let i2 = 1; i2 < data[j].length; i2++) {
x[i2] = data[j][i2];
}
data[j] = x;
}
}
for (let k = 0; k < data.length; k++) {
const arr = data[k];
const header = Uint32Array.from(
{
length: 4,
},
(_, index) => (arr.length >> (index * 8)) & 0xff,
);
const output = new Uint8Array(header.length + arr.length);
output.set(header, 0);
output.set(arr, 4);
std.out.write(output.buffer, 0, output.length);
std.out.flush();
std.gc();
}
} else {
const header = Uint32Array.from({
length: 4,
}, (_, index) => (message.length >> (index * 8)) & 0xff);
const output = new Uint8Array(header.length + message.length);
output.set(header, 0);
output.set(message, 4);
std.out.write(output.buffer, 0, output.length);
std.out.flush();
std.gc();
}
}
How would you write the above algorithm in Python?
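For the chunked path, a rough Python translation of the algorithm might look like the sketch below (assumptions: the payload is a JSON array encoded with compact separators, and no single element exceeds the chunk limit — `rindex` will raise otherwise):

```python
import json
import struct
import sys

CHUNK = 1024 * 1024  # 1 MiB cap per host -> browser message

def chunk_json_array(data: bytes, limit: int = CHUNK) -> list[bytes]:
    """Split a JSON-encoded array into standalone JSON-array frames,
    each at most `limit` bytes, cutting only at element boundaries."""
    if len(data) <= limit:
        return [data]
    body = data[1:-1]                 # drop the outer [ and ]
    frames, start = [], 0
    while start < len(body):
        end = min(start + limit - 2, len(body))   # leave room for [ and ]
        if end < len(body):
            end = body.rindex(b',', start, end)   # back up to a comma
        frames.append(b'[' + body[start:end] + b']')
        start = end + 1               # skip the separating comma
    return frames

def send_message(obj) -> None:
    data = json.dumps(obj, separators=(',', ':')).encode()
    for frame in chunk_json_array(data):
        sys.stdout.buffer.write(struct.pack('@I', len(frame)))  # native-order length
        sys.stdout.buffer.write(frame)
    sys.stdout.buffer.flush()
```

The extension side then concatenates the arrays it receives, mirroring the bracket bookkeeping the QuickJS version does byte-by-byte.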
9 posts – 2 participants
The Steering Council is now holding weekly meetings, so you can book office hours with us here.
Our current meeting slot is Tuesdays at 1:00 PM PST (you can see your local time here), with office hours every other week.
For more details, see the Office Hours section of the Steering Council README.
Warm regards,
Donghee
on behalf of the Python Steering Council
5 posts – 2 participants
]]>If it is meant to format Python source code, there is a bug: because it merges two lines, the result raises a syntax error when run.
If the purpose is only to format the docstring (no executable source code), the doc should make this clear.
3 posts – 2 participants
]]>A hard requirement is that plugins can be hot reloaded both during development and while the app is running, without restarting the main application.
It is aimed at being something like VSCode/Obsidian, etc., where users will be able to add UI elements as well.
What are my options?
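Not an official pattern, just a minimal stdlib sketch of one option: keep plugins as modules inside a package and re-execute them with importlib.reload (package and function names here are illustrative):

```python
import importlib
import pkgutil
import sys
from types import ModuleType


def load_plugins(package_name: str) -> dict[str, ModuleType]:
    """Import every module found in the given plugin package."""
    package = importlib.import_module(package_name)
    plugins: dict[str, ModuleType] = {}
    for info in pkgutil.iter_modules(package.__path__):
        name = f"{package_name}.{info.name}"
        plugins[name] = importlib.import_module(name)
    return plugins


def reload_plugin(name: str) -> ModuleType:
    """Re-execute an already-imported plugin module in place."""
    importlib.invalidate_caches()
    return importlib.reload(sys.modules[name])
```

The big caveat with importlib.reload is that objects created from the old module version (instances, registered callbacks) keep referencing the old code, so a real plugin host usually pairs this with an explicit register/unregister API and a file watcher (e.g. watchdog) to trigger the reload.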
5 posts – 4 participants
]]>Abstract
This PEP proposes introducing JSON encoded core metadata and wheel file format metadata files in Python packages. Python package metadata (“core metadata”) was first defined in PEP 241 to use RFC 822 email headers to encode information about packages. This was reasonable in 2001; email messages were the only widely used, standardized text format that had a parser in the standard library. However, issues with handling different encodings, differing handling of line breaks, and other differences between implementations have caused numerous packaging bugs. Using the JSON format for encoding metadata files would eliminate a wide range of these potential issues.
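As a rough illustration (hypothetical field values, not an example taken from the PEP): the same core-metadata fields in the legacy RFC 822-style encoding, parsed with the stdlib email parser and re-encoded as JSON:

```python
import json
from email.parser import HeaderParser

# Legacy core metadata as RFC 822-style email headers (values made up).
legacy = (
    "Metadata-Version: 2.1\n"
    "Name: example-package\n"
    "Version: 1.0.0\n"
    "Summary: An example\n"
)

# Parse the header format with the stdlib email parser...
msg = HeaderParser().parsestr(legacy)
# ...and re-encode the same fields as JSON.
as_json = json.dumps({k: v for k, v in msg.items()}, indent=2)
print(as_json)
```
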
The full PEP text is published: PEP 819 – JSON Package Metadata | peps.python.org
Interested to see what folks think!
19 posts – 7 participants
]]>In this example, I’m using Foo.__call__ as the decorator which adds MyBase.
from dataclasses import asdict, dataclass
from typing import dataclass_transform, TYPE_CHECKING
if TYPE_CHECKING:
from _typeshed import DataclassInstance
else:
DataclassInstance = object
class MyBase(DataclassInstance):
@classmethod
def lorem(cls, *args, **kwargs):
return cls(*args, **kwargs)
def ipsum(self):
return asdict(self)
class Foo:
@dataclass_transform()
def __call__(self, cls, **kwargs): # -> ???
def wrap(cls):
if MyBase not in cls.__bases__:
cls = type(cls.__name__, (MyBase,) + cls.__bases__, dict(cls.__dict__))
return dataclass(cls, **kwargs)
if cls is None:
return wrap # @foo(...)
return wrap(cls) # @foo
foo = Foo()
@foo
class Struct:
bar: str
baz: int
s = Struct(bar="yo", baz="something")
print(s)
s = Struct("yo", "something")
print(s)
s = Struct.lorem(bar="yo", baz="something")
print(s)
print(s.ipsum())
this runs fine:
Struct(bar='yo', baz='something')
Struct(bar='yo', baz='something')
Struct(bar='yo', baz='something')
{'bar': 'yo', 'baz': 'something'}
but type checkers can’t figure out that Struct has MyBase as a parent and is a dataclass:
$ mypy test.py
test.py:38: error: Unexpected keyword argument "bar" for "Struct" [call-arg]
/usr/lib/python3.14/site-packages/mypy/typeshed/stdlib/builtins.pyi:101: note: "Struct" defined here
test.py:38: error: Unexpected keyword argument "baz" for "Struct" [call-arg]
/usr/lib/python3.14/site-packages/mypy/typeshed/stdlib/builtins.pyi:101: note: "Struct" defined here
test.py:42: error: Too many arguments for "Struct" [call-arg]
test.py:46: error: "type[Struct]" has no attribute "lorem" [attr-defined]
test.py:50: error: "Struct" has no attribute "ipsum" [attr-defined]
Found 5 errors in 1 file (checked 1 source file)
$ ty check test.py
error[unknown-argument]: Argument `bar` does not match any known parameter of bound method `__init__`
--> test.py:38:12
|
36 | # No parameter named "bar"
37 | # No parameter named "baz"
38 | s = Struct(bar="yo", baz="something")
| ^^^^^^^^
39 | print(s)
|
info: Method signature here
--> stdlib/builtins.pyi:136:9
|
134 | @__class__.setter
135 | def __class__(self, type: type[Self], /) -> None: ...
136 | def __init__(self) -> None: ...
| ^^^^^^^^^^^^^^^^^^^^^^
137 | def __new__(cls) -> Self: ...
138 | # N.B. `object.__setattr__` and `object.__delattr__` are heavily special-cased by type checkers.
|
info: rule `unknown-argument` is enabled by default
error[unknown-argument]: Argument `baz` does not match any known parameter of bound method `__init__`
--> test.py:38:22
|
36 | # No parameter named "bar"
37 | # No parameter named "baz"
38 | s = Struct(bar="yo", baz="something")
| ^^^^^^^^^^^^^^^
39 | print(s)
|
info: Method signature here
--> stdlib/builtins.pyi:136:9
|
134 | @__class__.setter
135 | def __class__(self, type: type[Self], /) -> None: ...
136 | def __init__(self) -> None: ...
| ^^^^^^^^^^^^^^^^^^^^^^
137 | def __new__(cls) -> Self: ...
138 | # N.B. `object.__setattr__` and `object.__delattr__` are heavily special-cased by type checkers.
|
info: rule `unknown-argument` is enabled by default
error[too-many-positional-arguments]: Too many positional arguments to bound method `__init__`: expected 1, got 3
--> test.py:42:12
|
41 | # Expected 0 positional arguments
42 | s = Struct("yo", "something")
| ^^^^
43 | print(s)
|
info: Method signature here
--> stdlib/builtins.pyi:136:9
|
134 | @__class__.setter
135 | def __class__(self, type: type[Self], /) -> None: ...
136 | def __init__(self) -> None: ...
| ^^^^^^^^^^^^^^^^^^^^^^
137 | def __new__(cls) -> Self: ...
138 | # N.B. `object.__setattr__` and `object.__delattr__` are heavily special-cased by type checkers.
|
info: rule `too-many-positional-arguments` is enabled by default
error[unresolved-attribute]: Class `Struct` has no attribute `lorem`
--> test.py:46:5
|
45 | # Cannot access attribute "lorem" for class "type[Struct]"
46 | s = Struct.lorem(bar="yo", baz="something")
| ^^^^^^^^^^^^
47 | print(s)
|
info: rule `unresolved-attribute` is enabled by default
Found 4 diagnostics
What can I do to help type checkers understand this?
6 posts – 4 participants
]]>3 posts – 2 participants
]]>I (or to be precise the SageMath project) have a Cython-implemented Integer class that behaves like an int and is intended to be usable wherever an int is accepted, but it does not actually inherit from int. Declaring it as class Integer(int) in a .pyi stub would therefore be dishonest.
I have considered several options, none of which feel entirely satisfactory:
- Using SupportsInt/SupportsIndex is honest, but usually too weak: it does not allow Integer to be accepted by existing APIs annotated as int.
- Using numbers.Integral would imply nominal inheritance that does not exist unless I explicitly register the type at runtime, and there are various issues with the numeric tower (see e.g. Avoid the builtin `numbers` module. · Issue #144788 · pytorch/pytorch · GitHub).
- Defining a custom Protocol (e.g. an “IntLike”) is structurally correct, but does not help with third-party APIs expecting int.
- Similarly, using a union alias (int | Integer) is explicit but only helps with our own code, not third-party APIs.
Is there an established or recommended way to model “int-like but not an int” types so they interoperate well with static typing? If I accept that Integer will not be accepted by external APIs annotated with int, which of the other options (numbers.Integral, a custom protocol, or a custom alias) is the preferred one?
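For the custom-Protocol option, a minimal sketch (the names are illustrative, not SageMath's API); as noted, this helps within your own annotated code but not with third-party APIs that require int:

```python
import operator
from typing import Protocol, runtime_checkable


@runtime_checkable
class IntLike(Protocol):
    """Structural 'usable as an int' type; the name is illustrative."""

    def __index__(self) -> int: ...


def double(n: IntLike) -> int:
    # operator.index() is the canonical lossless "convert to int" hook.
    return operator.index(n) * 2


class FakeInteger:
    """Stand-in for an Integer-like class that does not inherit from int."""

    def __init__(self, value: int) -> None:
        self._value = value

    def __index__(self) -> int:
        return self._value


print(double(5), double(FakeInteger(21)))  # 10 42
```
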
12 posts – 6 participants
]]>
