We have multiple projects/setups running Django+Wagtail (5.2/7.0) with gunicorn+uvicorn (latest versions, no threading) as ASGI applications with django-channels. We use pgbouncer, and in Django we use CONN_MAX_AGE = 0. We have been running these setups for about 4 years now and we haven't had any issues with connection limits.
In the last few weeks, I assume due to package updates (we've upgraded Django from 4.2 to 5.2), the setups started raising FATAL: no more connections allowed (max_client_conn) (so we're hitting pgbouncer's relatively high client connection limit) after bots crawled many unknown URLs concurrently. Such crawling is not unusual and never resulted in connection issues in the past.
One of those setups is very small: the instance has 2 CPUs and Python 3.12, and it runs gunicorn with 2 workers, no threading, and 1 AsyncHttpConsumer without auth or database access (which was also never hit by the crawlers); the rest is a typical Django/Wagtail site with synchronous middleware and views. pgbouncer is configured with max_client_conn = 100.
Even this simple and small setup got no more connections allowed (max_client_conn) after being crawled.
Now, I’m trying to understand why this is possible.
My understanding of the ASGI setup is that the threads created by asgiref (?) to handle requests are limited by the default thread pool being used (which can be manually sized with ASGI_THREADS, according to the channels docs).
So in my understanding, if there are 2 CPUs, the default thread pool should have a size of 7 (2 CPUs + 5), and the 2 workers/processes together should therefore hold a maximum of 14 connections, since with CONN_MAX_AGE = 0 the connections are closed after every request.
Since the application is hitting the 100 connections, I'm probably missing something here, or there is an issue with database connections not being closed.
To me it looks like the request handling is leaking those connections somewhere, or it takes longer to close these connections while other threads are demanding new ones…
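To check this, I'm considering a small diagnostic middleware (a rough sketch only; the logging is mine, and connections.all() only sees the current thread/async context, so this is a lower bound rather than a process-wide total):

    import logging
    import threading

    from django.db import connections

    logger = logging.getLogger(__name__)


    class ConnectionCountMiddleware:
        """Diagnostic only: log alive threads and open DB connections per request."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            response = self.get_response(request)
            open_conns = sum(1 for conn in connections.all() if conn.connection is not None)
            logger.info("threads=%d open_db_connections=%d", threading.active_count(), open_conns)
            return response

If the thread count or connection count keeps growing past the expected 7/14 during a crawl, that would at least confirm where to look.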
We actually wanted to switch to Django’s native connection pooling for performance, but as long as I don’t understand the setup correctly, this switch would probably make it worse, as with native pooling and therefore direct connections to the db, we would have to use smaller connection pools (since the db is shared with other projects), which would get exhausted even faster.
Can you help me?
- Is my understanding of the ASGI setup wrong? Is this normal behavior, i.e. are the threads used for requests, and therefore the connections they use, actually never limited?
- Is there a way to determine the maximum number of threads used?
- Can we limit those threads, and therefore the maximum number of connections used?
In my cases I also work heavily with Celery, and it’s very important that I handle that rollover well. Unfortunately, some of my celery jobs can take a little while to complete (minutes or tens of minutes, rather than seconds), so the opportunity for accidentally causing disruption gets pretty high. Plus, I watch Sentry for problems, and seeing things error out for schema errors, even in celery tasks, really makes me nervous that it might be some important operation that’s being dropped on the floor.
It just depends on your application's workload, and how tolerant you can be of errors during the deployment. For me, I understand the pattern well enough that I'll generally implement django-safemigrate into my deployment, and then do my best to annotate the migration correctly to avoid the errors. I've found the incremental burden to be quite manageable once I've practiced it.
It’s still a new thing to learn when you’re getting started, and I don’t want to dismiss that complexity as inconsequential. Once you’re to the place that you’d like to automate migrations, you might consider whether it’s worth some time investment to set things up for it to at least be possible to roll things out without errors. It feels like a good junction point.
Now, after finally figuring out how to access the important data in the request in order to manually check those permissions (and I got that approach working), I take another look at has_object_permission and I see exactly where and how to check the permissions that I need. Okay, I'm going to refactor that so it fits into the framework better.
thanks for your help
When I say “think in terms of templates” it’s maybe more of a vibe than a specification.
In my current main project, we have views organized by access control (public, users, staff and api) so they’re in 4 different files, but the template-y methods for the API views are in there with the view methods.
@api_get
def entry_info(request, entry_id):
    entry = RegistryEntry.objects.get(
        Exists(Service.public_objects.filter(pk=OuterRef('service_id'))),
        pk=entry_id,
    )
    verified_domain = entry.get_verified_domain(entry)
    if not verified_domain:
        raise ValidationError(f"No verified domain found for {entry.service} ({entry.service.company})")
    return entry_template(entry, verified_domain, request)


def entry_template(entry, verified_domain, request):
    return {
        "trustLevel": entry.trust_level,
        "trustStatus": entry.trust_status,
        "verifiedDomain": verified_domain,
        "trustInfo": _trust_info_partial(entry),
        "serviceInfo": _service_info_partial(entry.service, request),
        "operatorInfo": _company_partial(entry.service.company, request),
        "authConnection": _auth_connection_partial(entry.service, request),
        "dataConnection": _api_connection_partial(entry.service),
        "entityDataValidity": {
            "validFromDT": date_field_to_utc_string(entry.approval_date),
            "validUntilDT": date_field_to_utc_string(entry.expiration_date),
        },
    }


def _trust_info_partial(entry):
    partial = {
        **value_or_not('servicePrivacyPolicy', entry.service.privacy_policy_url),
        **value_or_not('operatorPrivacyPolicy', entry.service.company.privacy_policy_url),
        'termsOfService': [
            url for url in [
                entry.service.privacy_policy_url,
                entry.service.company.privacy_policy_url,
                entry.service.company.terms_of_service_url,
            ] if url is not None
        ],
        'dataProtectionOfficer': _dpo_partial(entry.service.company),
        **(entry.trust_info or {}),
    }
    return partial

...
I think most of this is as easily readable as a serializer with an allow list, and it allows the structure of the API to be designed specifically for JSON. Django models often flatten data when it’s going into SQL, because you might need to filter on a field, so you can’t stuff that field deep into structured data. But we don’t necessarily want the same flattening for data being organized into our APIs.
This more extended example shows some of the places that have rough edges. E.g. the idea that the ‘servicePrivacyPolicy’ field can be excluded if the entry.service.privacy_policy_url db value is NULL is something that could be better presented.
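(For context, the value_or_not helper used above could be as small as this sketch – the real implementation may differ:)

    def value_or_not(key, value):
        # Include the key only when the DB value is present, so the field is
        # omitted from the JSON payload instead of being serialized as null.
        return {key: value} if value is not None else {}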
I’ve just started using it today, and I’m impressed. So far it has been working smoothly and flawlessly.
So I wrote django-safemigrate. It’s not the only solution to the problem, but it’s my take at a solution, and I’ve used it pretty successfully at two different companies. It might help you, too!
I’ve been tinkering with some ideas around routing and protocols since this post came out, and I’m happy to share some of the early ideas here, specifically on the protocols-draft branch. I’ve mostly been working on sketching out the interfaces here, while still ensuring that the concepts at least kind of work up to this point. I’ve tried to come up with a way, if not the final way, of incorporating routing, parsing/rendering, and mapping to Python objects into a cohesive API-first approach without being too opinionated just yet. I used Pydantic and DRF Serializers as my examples because I’m most familiar with those, but I think things like cattrs and msgspec could fit in as well.
I think there is still work to incorporate a Django sympathetic layer into this proposal, and that needs some more thought. I agree that is the more complex part of this initiative with the most possibilities. This approach is very function-based so far because that is what I am preferring these days. I think it can hook into CBVs as well, and that is something I’m hoping to explore more.
This is my first go at it, and I’m excited to see what approaches others might bring to the table.
@lisad in my day-to-day I tend to agree with a lot of the concepts you are proposing around “stable APIs.” I think of it more as “explicit” than “stable,” which you could argue is not really different. For example, in today’s ecosystem, you can still use ModelSerializer (DRF) or ModelSchema (Ninja) with an explicit fields that isn’t set to __all__ to keep your APIs explicit without necessarily redefining each field’s type from your model. With large payloads and performance-critical APIs, I often just use functions for building TypedDict instances, since then my IDE can still help me, which is similar to your functions concept. The idea of using templates has crossed my mind as well, but I prefer having my API mapping defined in the same file as my API function, so having a separate template file is not very appealing to me, personally.
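For illustration, a rough sketch of both approaches (the model, module, and field names are made up, not from a real project):

    from typing import TypedDict

    from rest_framework import serializers

    from myapp.models import Entry  # hypothetical model


    class EntrySerializer(serializers.ModelSerializer):
        class Meta:
            model = Entry
            fields = ["id", "title", "created_at"]  # explicit, not "__all__"


    class EntryPayload(TypedDict):
        id: int
        title: str


    def build_entry_payload(entry) -> EntryPayload:
        # The IDE can check these keys against the TypedDict.
        return {"id": entry.pk, "title": entry.title}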
I have also really grown to like the explicit nature of decorators on my APIs, but we’ll have to see if that is a fad for me. My example implementation definitely leans towards the decorator concepts. The API error handling on each of the views felt like it got a bit noisy in my experimenting so far, but I’m still playing around with that one specifically.
The fewer the details, the lower the quality of the responses you’re going to get.
With that in mind… you’re probably using some permission_classes on your view. They’re the ones responsible for this kind of check if you’re using the CRUD facilities in DRF (I’m assuming that you’re using DRF based on some of the method names). If so, are your permission_classes verifying that the user has permission for that specific project?
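Something roughly along these lines, for example (a sketch only; ReportProjectPermission, user_can_write_project, and project_memberships are made-up names, not DRF built-ins):

    from rest_framework import permissions


    def user_can_write_project(user, project_id):
        # Placeholder: however project-level write access is stored in your app,
        # e.g. a membership table or a token-scope lookup.
        return user.project_memberships.filter(project_id=project_id, can_write=True).exists()


    class ReportProjectPermission(permissions.BasePermission):
        def has_object_permission(self, request, view, obj):
            if request.method in permissions.SAFE_METHODS:
                return True  # or a read-specific check
            # On writes, validate the *target* project if the client is trying
            # to change it, not just the object's current project.
            project_id = request.data.get("project", obj.project_id)
            return user_can_write_project(request.user, project_id)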
I now realize that what’s proposed here is less ambitious than what was previously attempted, and I agree that it can still be valuable to contributors to have the details in-lined in their pull request contribution if the comments come with an admonition.
Sorry for jumping the gun here; maybe I’ve just been around for too long, but it wouldn’t be the first time I see very well-intentioned efforts follow the exact same path towards fixing long-standing problems in complete ignorance of previous attempts, and I wanted to make sure it wasn’t the case here, as I couldn’t find any references to them in the linked documents.
I’ve never used memcached though. I’m wondering if there are any other alternatives for Python clients?
- Introduction (why should we update the homepage and why do I personally care about this)
- Homepage Goals (what do I think the goals of the homepage should be)
- Competitor Analysis (what are my thoughts when evaluating the homepages of other server side frameworks)
- Homepage Analysis (what are my thoughts about the current Django homepage)
I will stress this is all my opinion and I am not trying to make light of the current work being done, the current design, or anyone else’s effort. I think we all want the same thing at the end of the day – to improve Django, increase adoption, and share how great Django (and the community) is.
I would appreciate any feedback and constructive criticism of the document I wrote up, along with explicit next steps to continue to make progress.
I’ll give my 2 bits here…
We’re using AWS App Runner, and have the migrate command run before starting the server (in the entrypoint script). Due to the blue/green deployment strategy (which I think is similar to the ECS deployment strategy), we have a small window of time during which HTTP requests go to the previous version’s server, and that may cause some failures. For us, it’s not that big of a deal. Most of the time this is not noticeable by end users.
In our pipeline, we also deploy Celery on an EC2 instance after pushing the new application image to ECR, and we also run the migrations before starting the Celery worker. From what I remember, the Celery process is normally the one that executes the migrations (since it happens before the image is picked up and started by App Runner), and for us this is the more important process to have the migrations synced.
However, there’s an exception swallowed on startup under Python 3.13 that has been fixed since July 2024, but there hasn’t been a pylibmc release since 2022.
Granted, the situation with pymemcache isn’t significantly better (no releases in 3 years either, but at least their CI does target py313, albeit with a version of gevent in the test requirements that is too old to compile).
So I have to wonder out loud: should we stop recommending pylibmc without better maintenance?
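For reference, the two backends in question are configured like this (a settings sketch; the address is just an example):

    CACHES = {
        "default": {
            # "BACKEND": "django.core.cache.backends.memcached.PyLibMCCache",
            "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
            "LOCATION": "127.0.0.1:11211",
        }
    }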
I need to have a model like this:
- QuestionID
- Question: string
- QuestionType: one of the input types – TextInput | Single Choice | Multiple Select
- ParentQuestionID: 0 if it is one of the parent questions, else the parent QuestionID
I also need another model for the answers to the Single Choice type questions.
I need the staff to enter nested questions based on matching pre-selected answers.
I am not sure if this is possible in django-admin. Or is creating a UI from scratch for the staff for these ‘kind’ of questions the only way out? I chose Django because of django-admin.
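For reference, a rough sketch of how such models could look (field names are just illustrative, not a final design):

    from django.db import models


    class Question(models.Model):
        class QuestionType(models.TextChoices):
            TEXT_INPUT = "text", "Text input"
            SINGLE_CHOICE = "single", "Single choice"
            MULTIPLE_SELECT = "multi", "Multiple select"

        text = models.CharField(max_length=500)
        question_type = models.CharField(max_length=10, choices=QuestionType.choices)
        # NULL instead of the sentinel 0 for top-level questions.
        parent = models.ForeignKey(
            "self", null=True, blank=True, on_delete=models.CASCADE, related_name="children"
        )
        # Which answer on the parent question reveals this nested question.
        triggering_choice = models.ForeignKey(
            "Choice", null=True, blank=True, on_delete=models.SET_NULL, related_name="+"
        )


    class Choice(models.Model):
        question = models.ForeignKey(Question, on_delete=models.CASCADE, related_name="choices")
        label = models.CharField(max_length=255)

With models like these, a TabularInline for Choice on the Question admin would let staff enter the choices alongside each question, and a nested question is created by picking a parent and a triggering choice.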
Options 1 and 2 are more up in the air — but I’d recommend to everyone: if you’ve got an idea, put up a proof of concept. The more the better here.
- Having API guide pages that lay out common options. This is probably the most impactful quick win to my mind. I would possibly go so far as to say a first draft could be done quickly with an AI of choice and then refined by humans.
- I have been pondering what the Python API for APIs in core could look like, and I am slowly coming around to the idea that it lives in the view layer with some decorators for FBVs and additions (or perhaps a new start?) for CBVs, along with work on the URLs layer of routers in the mix. There is a huge amount of choice at the serialization layer, so having a common interface to choose your own adventure there would seem key to me and allow third-party packages to continue to provide optionality and innovation.
The general point for me is getting the Python API right that would allow someone to migrate from DRF to cattrs to pydantic etc without a huge amount of effort.
I’ve also been pushing the proposal to add Content Negotiation to Django’s Request object for a good few years now. This is the missing bit at the request layer that we’d need for feature equivalence to DRF there. This hasn’t been successful yet, but I know @emma wants to drive it forward, directly because of this API story topic, so I’m hopeful that will make ground in the next cycle. This should just be part of core, in my opinion. (It’s foundational in a way that other layers aren’t. DRF’s wrapping of the Request object to add it was always a source of pain.)
The serialisation story is the other leg, so to speak. It’s this bit that I think is too in-flux (i.e. exciting, new, unknown, …) to go straight into Django, and we should let the ecosystem continue to explore.
It’s similar to template components: everyone wants them, but it’s a fertile ground of new ideas, and we need to see how that unfolds.
In both these cases, I’m absolutely behind promoting what’s out there, and have been trying to do that. We should be doing that more, IMO.
The point about perfect vs good is that there are already demonstrations that we can have top level performance by adopting modern approaches. This isn’t something that’s theoretical.
The “dream” solution would be if our website could directly access local folders, make copies, rename them, and package them in place — but as far as I know, that’s not possible.
Well, why not?
You can access the local filesystem from your Django app: you do not even need to use File or Django’s file storage API, but can directly use Python’s built-in file and directory access methods.
For user downloads, you may need to consider:
- protection issues (control who can access whose files),
- performance issues (e.g. if you intend to implement this with FileResponse objects – there are alternatives like the X-Accel-Redirect header mechanism; see the sketch below), and
- possibly other caveats (e.g. handling I/O errors, long-running activity such as packaging, large files, …),
but generally this is a proper method for dealing with files.
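A minimal sketch of the two download approaches mentioned above (the path handling and the /protected/ prefix are assumptions, and the latter requires matching nginx configuration):

    from django.http import FileResponse, HttpResponse


    def download_direct(request, path):
        # Django streams the file itself; fine for smaller files.
        return FileResponse(open(path, "rb"), as_attachment=True)


    def download_accel(request, path):
        # Hand the transfer off to nginx via an internal location
        # mapped to /protected/, so Django only issues the redirect header.
        response = HttpResponse()
        response["X-Accel-Redirect"] = f"/protected/{path}"
        return response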
Absolutely we should be documenting the options folks have. But we need to let the current very exciting developments play out before we pull the trigger on an in core option.
There’s no reason why Django can’t both be as fast as other (Python) frameworks out there and have an ORM to wire story that makes folks jealous once again.
We also should not let perfect be the enemy of the good. It’s time that something happens on this front.
It’s basically just meant to automate renaming and restructuring files for our users. This is why I wanted to propose a new approach — we’re essentially making the user download their own files again, just with some external tweaks (renaming and zipping).
The “dream” solution would be if our website could directly access local folders, make copies, rename them, and package them in place — but as far as I know, that’s not possible.
I realize this current idea isn’t great, and I’d like to change it as quickly as possible before the team starts mass-implementing it across the whole app. I’ve been instructed to look into django-s3-storage (GitHub: etianen/django-s3-storage, Django Amazon S3 file storage). Is there an alternative way to achieve the same result?
I’m wondering if there are any caveats when adding the migrate command to deployment.
My company uses AWS ECS for Django project deployment. We use gunicorn to run the app in production, but the migrate command is missing from that process. The reason is that our previous DevOps engineer was super conservative about DB operations during app deployment. Fair point. Since we cannot directly access the Docker container in production, we instead create a SQL file by running sqlmigrate locally, connect to the production DB, and then run the SQL script manually.
Sounds pretty unnecessary, doesn’t it?
So I would love to put the migrate command in the deployment script. Then this question hit me: ‘What if one thread still uses old code when the DB migration is already complete?’ If I have many servers running with gunicorn on AWS ECS, wouldn’t such a scenario be possible?
I wonder how other folks handle running the migrate command in production.
Please redirect me if there’s already a related discussion.
P.S.
Please note that we have development and staging servers. We do all the testing there, so it’s highly unlikely that the migrate command will crash.

The current PR only posts a comment of the coverage report and does not fail when there are missing lines of coverage (as the test is limited). In the comment, it says:
Note: Missing lines are warnings only. Some lines may not be covered by SQLite tests as they are database-specific.
Beyond database versions, we may have Python version specific code etc. It certainly isn’t a complete report.
We already have a Jenkins CI coverage report (https://djangoci.com/job/django-coverage/HTML_20Coverage_20Report/) which is run daily against main.
I believe this adds value despite being incomplete (I think this is also only run on SQLite). The idea with the GitHub action is to have this information automatically available on PRs.
That being said, I think having this CI job (perhaps including the Jenkins coverage job) documented in our contributing docs with its limitations might be wise. Somewhere within our docs around reviewing PRs (e.g. Submitting contributions | Django documentation | Django). Then we have something we can link to with more information if folks are finding the report confusing (or the comment itself can link to it). We may also mention limitations around coverage: the report saying lines are “covered” doesn’t always mean they are well tested (this should be checked in review).
In short, for PRs which don’t impact the ORM, I feel this would add value in most cases.
So I am +1 on us having this limited coverage report posted on PRs

I’ve been building ShadeDB, a database engine designed to be fast, minimal, and easy to integrate into projects — especially useful for Python web apps where you want:
- Quick, Redis-like in-memory operations
- Simple CLI-based setup (no heavy configs)
- A small footprint that can run on almost any device/server
- Flexible data handling for prototypes, microservices, and lightweight APIs
Whether you’re working with Flask, Django, or other Python frameworks, Shadecrypt can act as a rapid datastore for caching, session storage, or even small-scale persistence.
Watch the intro video:
Explore the code:
Join the community:
WhatsApp: WhatsApp Group Invite
Connect with me:
We’re currently making a system to mass rename files saved locally on our users’ local machines.
That sounds like one system and
The current proposed solution by our senior is a real head-scratcher: he expects a feature where a user can upload a file, to which our system writes a copy of that file with proper naming conventions.
That sounds like an entirely different system. If the first statement is true, then no one should upload anything.
And if you have access to the filesystem in the first place (to rename files), then why would you need to upload?
I’m very curious to understand what a pluggable, Django-sympathetic validation and serialization layer looks like. I’m particularly interested in its use not only in REST APIs, but also across internal, cross-app APIs as well. But it touches so many different parts of the stack that I have difficulty grokking what the shape of it would be. It feels more like the Models part of the ORM than the database backends part of the ORM, if that makes any sense
It absolutely does. And, yes, it’s not just about REST APIs. (For me, the one you didn’t mention is logic bearing Display Objects to pass into templates, but let’s not go off there… )
I’m working on a proof of concept here now. I’ve been pottering on it for a couple of years, but the discussion here and @FarhanAliRaza’s recent benchmarking work showing Django with comparable performance to FastAPI if we but only used a modern serialisation option (msgspec in his case) gave me a boost. I finally worked out the API I want on paper yesterday. (So I’ll have something to show in the coming period. I’d have it already but the work project is almost 100% HTML driven, so it’s not been pressing personally.)
——
Boring tech…
Yes. Absolutely. But I think we’re at a point where another set of advances are now visible. I think if we just merged Ninja, just as if we just merged DRF, we’d be skating backwards instead of forwards. We’d do it and be back here moaning that we’re lagging behind immediately.
Absolutely we should be documenting the options folks have. But we need to let the current very exciting developments play out before we pull the trigger on an in core option.
There’s no reason why Django can’t both be as fast as other (Python) frameworks out there and have an ORM to wire story that makes folks jealous once again.
Users can access and edit Report objects in our database.
Each Report belongs to a Project object, and has a foreign key pointing to that Project.
Some users will have project-specific write tokens, while other tokens are broader.
It seems that has_object_permission gets checked on model read and access, but access permissions for objects referenced by foreign keys aren’t checked. As a result, users can access a Report and change its Project ID, moving it into a Project which they don’t have write access to. I’m now realizing this might be a strange setup, with children pointing to their parents.
I’ve tried adding code to the api.views.report.partial_update to raise a PermissionDenied in the above case, but no luck so far. Is this the best place to check those permissions (for a PUT or PATCH) before writing? I just added code to raise a PermissionDenied no matter what, in partial_update, just to make sure I’m in the right place, but that had no effect on a PUT.
Really sorry I can’t include code, but this one’s far too sensitive.
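For discussion purposes, here is a generic sketch of the kind of hook I mean (not my actual code; ReportViewSet and the access check are placeholder names):

    from rest_framework.exceptions import PermissionDenied
    from rest_framework.viewsets import ModelViewSet


    class ReportViewSet(ModelViewSet):
        def perform_update(self, serializer):
            # Runs for both PUT and PATCH, after validation but before save.
            new_project = serializer.validated_data.get("project")
            if new_project is not None and not self._can_write(new_project):
                raise PermissionDenied("No write access to the target project.")
            super().perform_update(serializer)

        def _can_write(self, project):
            # Placeholder: whatever the token/project access model actually is.
            return project in self.request.user.writable_projects.all()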
In terms of Option 1, I think 1c is the route forward. Modern serialisers are what we need. I don’t rule out such being based on Pydantic — although that wouldn’t be my choice — but I think there needs to be a more Django-sympathetic layer in-front of that. [More to say here, but not today…]
I’m very curious to understand what a pluggable, Django-sympathetic validation and serialization layer looks like. I’m particularly interested in its use not only in REST APIs, but also across internal, cross-app APIs as well. But it touches so many different parts of the stack that I have difficulty grokking what the shape of it would be. It feels more like the Models part of the ORM than the database backends part of the ORM, if that makes any sense.
I get that Django is a project that’s been around a long time, and that there is reluctance to place a bet on a winner that might not pan out. But I would like to offer a different perspective: Django’s pick here doesn’t need to be “the winner” in order to secure Django’s place as a compelling web framework. It just has to be good.
I work on a large Django project that started in 2011. I didn’t join until 2012, so the framework choice had already been made at that point. I had come from previous jobs that used Pylons and had really strict requirements about querying the database. At the time, every particular thing in Django leapt out at me as being an inferior option. The templating engine was slow and limited compared to Jinja or Mako. The ORM was primitive and confining compared to SQLAlchemy. Tornado had a better story around serving high levels of traffic. About the only things that stood out to me as first-class about Django was its admin interface and its docs. I also had a criminal under-appreciation of the concept of apps. There is a decent chance that if I had been the one to choose at the time, I would have picked a different framework.
I would have been wrong. Choosing Django was probably one of the best decisions that our project made in those early days, but not because Django’s features became best-in-class across the board. Django represents a coherent, documented, supported, integrated, and easily upgradable set of features that are good enough on their own, and extremely compelling when packaged together. Did I want to use some feature in SQLAlchemy? Sure. Was I willing to give up the Django Admin for it? Not a chance. Do I want to play with django-ninja? Yup. Am I going to advocate that we adopt it without more evidence that it will be strongly maintained in five years? Not really.
FastAPI came out six years ago. Using Pydantic as a way to validate, serialize, and generate API docs now qualifies as boring tech (in a good way). Even if I prefer the approach that cattrs takes in the abstract, it’s a lot less interesting to me if I can’t easily generate a JSON schema from it or take advantage of whatever else is in the greater ecosystem around it.
As it stands today, the Pydantic-based approach of FastAPI and Django Ninja is a huge step up over DRF in terms of developer experience. There’s always going to be something better down the line, but will it be so much better that it’s going to be a deciding factor for developers?
There are folks on this thread that know much more about Django than I do, and have been thinking about this problem longer than I have. I’m giving my two cents of feedback here, but I realize that I’m only seeing a small part of the picture.
Honestly, I’m just looking for a blessed, supported upgrade path towards something with a developer experience that is comparable to FastAPI in terms of validation/serialization/doc-generation. If that gets rolled into Django proper, that’s great. If the Django leadership decides that Django wants to be the Debian of web frameworks and django-ninja will be the DSF-blessed REST API-centric distro, I’d shrug my shoulders, roll with it, and ask what the LTS release cycle for django-ninja will be.
Running .tables in sqlite shows all the tables created.
Please post the output from all those commands here. Just posting a summary does not help us help you.
(No, you should not change INSTALLED_APPS.)
INSTALLED_APPS has a long list of applications, but I just tried limiting it down to the application having the issue, so INSTALLED_APPS = ["xxx"].
Some part of the code is only covered by tests run on Postgres, MySQL, Oracle, or a particular Python version, for example, so there needs to be a coordinated job that collects all of the .coverage data artifacts and then combines them; otherwise the resulting coverage report will be lacking or improperly reporting that some areas are not covered (e.g. if we only use the SQLite test run and Postgres-only changes are introduced).
This is especially difficult because some tests are run on Jenkins (the vast majority) and others on GitHub.
VS Code needs the venv folder inside the project folder.
This is not an accurate statement. VS Code will support the venv directory anywhere within the file system – you just need to configure it to identify its location.
To confirm that I’m understanding you correctly, you have done:
- Deleted the db.sqlite file (or whatever name you use for the database file).
- Run python manage.py makemigrations
- Verified that you have migration files in your app’s migrations directory
- Run python manage.py migrate
How have you verified that the “application tables” have not been created?
If you run python manage.py showmigrations, does it show your app’s migration files?
What is the content of your INSTALLED_APPS setting?
My venv was outside the project folder and VS Code needs the venv folder inside the project folder. Now it works.
I run python manage.py migrate and I see steps like “Applying xxx.0001_initial… OK” displayed. But the application tables are not created. I tried running with -v3 and I see no errors. I do see some tables created (auth_group, …), but not the application tables. I tried removing the files from the migrations folder and regenerating them via python manage.py makemigrations, which generates files, but the next step to generate the tables still does not work. Note, I am using sqlite, so I have been deleting the database before generating the tables.
Attendees: Eli, Saptak, Tim, Thibaud
Actions
GitHub Projects: django accessibility improvements, All Table
Actions review
GitHub Projects: django accessibility improvements, All Table
Agenda
- PR Review session
- Rahmat and Eli would like Saptak to lead one
- Saptak to find an issue to lead a session
- Eli to send lettuce meeting poll
- Post by Adam Hill: Want to work on a homepage site redesign?
- And also Fediverse discussions
- Interest from the community outside the Website WG
- State of discussions in the website WG
- Going over all open issues and PRs, get them merged, clean up repo
- There is UX research done
- People aren’t aware about next steps
- Possible next steps
- Announce our plans
- Figure out where / how we fit Adam’s homepage plans
- Decide what we deprioritise from our roadmap
- Call for designers?
- Accessibility input at some point
- Likely timeline
- 1-2 months of work likely needed before we even get to mockups
- Bring website redesign plans forward but not “right now”
- 2-3 key deliverables for accessibility to unblock imminent homepage work or longer-term website work
- UI/UX/usability team
- Opportunity to coordinate contributors in this space
- Separate or combined with accessibility team?
- Blog posts about UI/UX (“Django forms with Tailwind”)
- Experts on command line interfaces
- Ask Adam Johnson if he knows contributors in this space
- Reach out to Tracy Osborn, previously involved with Django community?
- Volunteer vs. paid work
- Need volunteers to prepare good briefs, review proposals, review any work
- TBC whether the main visual design work would be paid or not
- Need UX work to be done upfront of the visual design
- Reach out to consulting firms to ask for design contacts
- Open Source Design
- Ask for volunteer input right now at the early stage
- Proposed website redesign milestones:
- Marketing strategy
- (Possibly renewing the vision for the framework a bit)
- (Brand guidelines)
- User research
- Redesign brief
- (Website content strategy)
- (Website moodboard)
- User journey mapping / information architecture for the site
- (Data on current site usage)
- Design mockups, possibly low-fidelity prototypes
- Visual design
- Redesign brief first steps with Nicole
However, this is another issue with my VS Code IntelliSense right now, as I do not understand why Pylance and VS Code IntelliSense do not work well together, as I wrote before.
