Distributed Press, a platform for decentralized publishing, embarked on a migration experiment from Kubo to Helia to enhance its API’s performance and developer experience, as part of our IPFS Utility grant-funded work. We chose Helia for its lightweight, JavaScript-native architecture, which promised faster initialization, reduced resource usage, and greater modularity compared to Kubo’s monolithic design. This migration, detailed in PR #101, aimed to streamline content publishing and improve maintainability. However, the process revealed several configuration challenges, from DHT advertisement failures to connectivity issues. This report shares our journey, performance comparisons, and practical guidance to help developers build robust Helia-based applications, avoiding the pitfalls we encountered.
Performance Comparison: Helia vs. Kubo
We measured initialization and end-to-end (E2E) publish operation times to compare Helia and Kubo, focusing on their efficiency in a server environment:
| Metric | Kubo | Helia | Notes |
|---|---|---|---|
| Initialization | 10–13s | 150–550ms | Helia’s lightweight design reduces startup time compared to Kubo’s monolithic setup. |
| E2E Publish | 900ms–2s | 160–800ms | Helia’s streamlined DHT interactions speed up content upload and advertisement. |

Helia’s faster initialization stems from its modular architecture, which avoids loading unnecessary components like Kubo’s built-in gateway. The E2E publish performance benefits from optimized Libp2p configurations, though proper tuning was critical to achieving these results.
Helia’s Customizability with Libp2p
Helia’s integration with Libp2p offers developers full control over the networking stack by allowing direct injection of a Libp2p instance, unlike Kubo, which embeds and configures go-libp2p internally through a limited set of exposed JSON settings. As a binary, Kubo abstracts much of the networking configuration behind its own API, gateway, and swarm settings, which can be limiting when trying to customize or optimize behavior. In contrast, Helia, built on Libp2p’s modular stack, allows developers to select transports (e.g., TCP, WebSockets, WebRTC), encryption (e.g., Noise), and peer discovery mechanisms (e.g., bootstrap, mDNS). This modularity enabled us to tailor our node for server-side publishing, enabling WebRTC for NAT traversal and configuring kadDHT for server-mode operation. However, this flexibility requires careful configuration to avoid issues like private IP advertisement or connectivity failures, as we learned during our migration.
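As an illustration, a server-side node along these lines can be assembled by injecting a custom Libp2p instance into Helia. This is a minimal sketch rather than our exact production config: option names vary slightly between js-libp2p releases, and the listen port is a placeholder.

```js
import { createHelia } from 'helia'
import { createLibp2p } from 'libp2p'
import { tcp } from '@libp2p/tcp'
import { noise } from '@chainsafe/libp2p-noise'
import { yamux } from '@chainsafe/libp2p-yamux'
import { kadDHT } from '@libp2p/kad-dht'
import { identify } from '@libp2p/identify'

// Sketch: a server-mode node with a hand-picked transport stack
const libp2p = await createLibp2p({
  addresses: { listen: ['/ip4/0.0.0.0/tcp/4001'] }, // placeholder port
  transports: [tcp()],
  connectionEncrypters: [noise()],
  streamMuxers: [yamux()],
  services: {
    identify: identify(),
    dht: kadDHT({ clientMode: false }) // server mode: answer DHT queries
  }
})

const helia = await createHelia({ libp2p })
```

Because the whole Libp2p instance is yours, swapping in WebRTC transports or a different muxer is a one-line change rather than a fork of the node's internals.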
Development Guidance
Our migration from Kubo to Helia revealed several pain points that developers can avoid by following these recommendations, drawn from our challenges documented in PR #101.
Start with Libp2p Defaults
Helia’s default Libp2p configurations provide a solid foundation for peer-to-peer networking, reducing setup complexity. Key resources include:
- Node.js Defaults: Configures TCP, WebSockets, Noise encryption, and Yamux/Mplex stream muxers for server environments.
- Browser Defaults: Optimizes for WebRTC and WebSockets, ideal for client-side applications.
Key Libp2p/Helia Configurations and Their Purpose:
| Component | Purpose |
|---|---|
| `tcp` | Enables TCP transport for reliable peer connections (TCP Docs). |
| `webSockets` | Supports WebSocket connections for browser and server compatibility (WebSockets Docs). |
| `webRTC` | Facilitates NAT traversal for peers behind firewalls (WebRTC Docs). |
| `circuitRelayTransport` | Enables relaying through other nodes for connectivity (Circuit Relay Docs). |
| `noise` | Provides encryption for secure peer communication (Noise Docs). |
| `yamux` | Manages multiplexing for multiple streams over a single connection (Yamux Docs). |
| `kadDHT` | Implements Kademlia DHT for peer and content discovery (KadDHT Docs). |
| `bootstrap` | Connects to predefined peers for initial network discovery (Bootstrap Docs). |
| `autoNAT` | Detects if the node is publicly dialable to inform other modules like the DHT (AutoNAT Docs). |
| `uPnPNAT` | Uses UPnP for automatic port mapping (UPnP Docs). |
| `identify` | Shares peer identity and updates (Identify Docs). |
| `ping` | Tests peer connectivity (Ping Docs). |
Recommendation: Start with these defaults and customize only as needed: `import { libp2pDefaults } from 'helia'`.
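For example, the exported defaults can be taken as a base and selectively overridden. This is a sketch; the exact shape of the defaults object may differ between Helia releases.

```js
import { createHelia, libp2pDefaults } from 'helia'
import { kadDHT, removePrivateAddressesMapper } from '@libp2p/kad-dht'

const options = libp2pDefaults()

// Override just the DHT service, keeping every other default intact
options.services.dht = kadDHT({
  clientMode: false, // run as a DHT server
  peerInfoMapper: removePrivateAddressesMapper // don't advertise private IPs
})

const helia = await createHelia({ libp2p: options })
```

Mutating the defaults object keeps the tested transport and muxer setup while changing only the one service that needs server-side behavior.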
Use // @ts-check for JavaScript
For developers using JavaScript, adding // @ts-check at the top of your Helia configuration file enables type checking without requiring a full TypeScript setup. This caught several errors in our migration, such as incorrect kadDHT options, saving significant debugging time.
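For instance, with the pragma in place, an editor or `tsc --checkJs` checks option objects against the published type definitions. The misspelled option below is a made-up illustration of the kind of mistake it catches:

```js
// @ts-check
import { kadDHT } from '@libp2p/kad-dht'

const dht = kadDHT({
  // Typo: the real option is `clientMode` — with // @ts-check the editor
  // flags this as an unknown property instead of silently ignoring it
  clientmode: false
})
```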
Avoid Private IP Advertisement
One of the challenges was the node advertising private IPs (e.g., 127.0.0.1, 10.x.x.x) in the DHT, leading to “gater disallows connection” errors. This occurred because we were not announcing our public IPs in the announce list, causing the node to default to private addresses shared during the identify process. As clarified in How does js-libp2p deal with private networks and IPs?, all listening addresses are sent during identify, and kadDHT can filter private addresses with removePrivateAddressesMapper if needed (e.g., as implemented in the Amino DHT). However, the root issue was the lack of public IP announcement. Take a look at js-libp2p-amino-dht-bootstrapper for how we configure a public js-libp2p node. Additionally, filtering private IPs isn’t always necessary if public peers can connect via dial-back or circuit relay reservations, though expired reservations (e.g., during IPFS checks) could still cause issues. To address this:
- Use `removePrivateAddressesMapper` in `kadDHT` to filter private IPs.
- Announce your public IPs explicitly. Example from our config:

```js
addresses: {
  announce: [`/ip4/${publicIP}/tcp/${tcpPort}`, `/ip4/${publicIP}/tcp/${wsPort}/ws`],
}
```

Kubo vs. Helia Binding
Kubo’s default configuration binds its API and gateway to `127.0.0.1` for security, while its swarm uses `0.0.0.0` for peer connections:
API: `/ip4/127.0.0.1/tcp/${apiPort}`
Helia, lacking a built-in API or gateway, requires explicit binding to `0.0.0.0` for external access. We updated our Ansible configuration (`distributed_press_host: "0.0.0.0"`) to allow external connections, resolving issues like “dial backoff” errors seen in IPFS checks.
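To make the private ranges concrete, here is a small standalone helper (purely illustrative, not part of Helia or Libp2p) that classifies the IPv4 addresses a publicly reachable node should avoid advertising:

```javascript
// Hypothetical helper: returns true for addresses in the loopback and
// RFC 1918 private ranges — the kind of addresses behind the
// "gater disallows connection" errors when advertised in the DHT
function isPrivateIPv4 (ip) {
  const [a, b] = ip.split('.').map(Number)
  return (
    a === 127 ||                          // loopback (127.0.0.0/8)
    a === 10 ||                           // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12
    (a === 192 && b === 168)              // 192.168.0.0/16
  )
}

console.log(isPrivateIPv4('127.0.0.1'))   // true
console.log(isPrivateIPv4('203.0.113.7')) // false
```

Libp2p ships equivalent logic internally (e.g. behind `removePrivateAddressesMapper`), so in practice you would rely on that rather than rolling your own; the sketch only shows what is being filtered.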
Recursive Directory Upload & Pinning
For projects that need to add and persist entire directory trees, Helia’s globSource utility makes it easy to recursively upload folders and then pin the resulting root CID (and all child CIDs) in one step.
```js
import { createHelia } from 'helia'
import { unixfs, globSource } from '@helia/unixfs'

async function addDirectory (dirPath) {
  // Initialize your Helia node (once)
  const helia = await createHelia()
  const fs = unixfs(helia)

  // Recursively add all files under dirPath via glob pattern.
  // addAll() returns an async iterable; the final entry
  // corresponds to the directory itself
  let rootCid
  for await (const entry of fs.addAll(globSource(dirPath, '**/*'))) {
    rootCid = entry.cid
  }

  // Pin the directory CID (and all children) in one go
  await helia.pins.add(rootCid)

  console.log(`Directory ${dirPath} added and pinned at CID: ${rootCid}`)
  return rootCid
}
```

Additional Tips
- Connectivity Testing with IPFS Check: IPFS Check proved to be an invaluable resource for testing node connectivity and content advertisement. It provided real-time insights into peer connections and DHT propagation, significantly streamlining our debugging and confirming that our Helia node was correctly configured for external access.
- Logging: Extensive logging (e.g., `ctx?.logger.info`) helped diagnose issues like failed DHT provides or IPNS resolutions.
- Firewall Rules: Ensure all required ports are open in your firewall configuration, as we updated in our Ansible setup.
- Test Timeouts: Increase test framework timeouts to accommodate Helia’s network operations, especially in CI environments.
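As an example of the last point, if your suite happens to run on Mocha (a hypothetical choice here; adapt to your framework), the default 2-second timeout can be raised in a `.mocharc.cjs`:

```js
// .mocharc.cjs — raise the per-test timeout for network-heavy Helia tests
module.exports = {
  timeout: 60_000 // DHT provides and IPNS publishes can take well over 2s
}
```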
Conclusion
Migrating to Helia improved our API’s performance and developer experience, but required careful configuration to avoid pitfalls like private IP advertisement and timeout issues. By starting with Libp2p defaults, enabling type checking, and tuning DHT and connectivity settings, developers can build robust Helia-based applications. Our experience, documented in PR #101, provides a roadmap for others to follow, ensuring efficient and reliable IPFS integration.
The challenges of using banks or third-party platforms for fundraising are increasing.
Important causes are often misunderstood by platform policies—just like the Ukrainian NGO Come Back Alive, which was banned from Patreon in 2022. In response, they transitioned to cryptocurrency donations and raised $400,000 in just 24 hours.
Similarly, banking regulations often lack the flexibility to distinguish between legitimate charitable activity and suspicious behavior: a UK survey found that over 1 in 20 charities faced account freezes in 2023, leaving them unable to access funds at critical moments.
These cases highlight how cryptocurrencies provide a resilient alternative for organizations facing financial censorship, ensuring they can continue operations and receive support without interference.
🔥 That’s why we built a censorship-resistant donation page.
With crypto donations across multiple networks, you no longer have to rely solely on fragile banking systems or restrictive platforms.
Hosted on Distributed Press, your donation page stays online forever—unstoppable, uncensorable, and free from financial gatekeeping.
🛠️ Open-source & easy to set up—start accepting unstoppable donations today.
🔗 Fork and start here
👀 Read our documentation here
Sources:
https://charities.network/articles/debanking-and-frozen-funds-charities-ongoing-struggles/
https://givewp.com/ukraine-crypto-donations-how-cryptocurrency-impacts-global-giving/
Distributed Press is on a mission to contribute to a web that is more private, reliable, secure and open. With the Filecoin Foundation for the Decentralized Web-supported Resilience Grant, we sought to spotlight the vulnerability of centralized digital infrastructures and provide sustainable alternatives through peer-to-peer protocols. Our contemporary internet is more fragile than it seems, with entire digital collections disappearing due to server failures, targeted attacks or lack of maintenance.
One such case was Colectivo TLGB Bolivia, a network defending the rights of LGTBQ+ communities across the country. Their online platform had become a target for cyberattacks, misinformation campaigns, and digital vandalism. Their site even suffered defacement—where attackers altered its appearance to spread hate—putting years of advocacy, reputation and crucial resources at risk.
SOLUTION
The Resilience Grant invited organizations that have lost access to their digital assets to apply for a site recovery. The selected organizations were offered the chance to have their archives restored and converted into static sites, distributed via HTTPS, IPFS, and Hypercore—eliminating dependence on a single centralized server and ensuring long-term accessibility.
RESULT
Colectivo TLGB Bolivia was one of the selected cases, meeting both the technical and organizational criteria of the initiative. By using decentralized web technologies and static-site architecture, we helped recover their website, shielding it from future cyberattacks, ensuring its long-term availability, and preserving its extensive archive of LGBTQ+ activism, legal resources, and community-driven documentation.
FULL STORY
The internet is full of valuable information, but it’s also fragile. Websites disappear all the time due to technical failures, lack of maintenance, and attacks. This growing problem is referred to as “digital decay”. A recent study by Pew Research Center found that 54% of Wikipedia pages contain at least one link in their “References” that points to a page that no longer exists. Digital decay means that entire collections of knowledge are being lost.
Colectivo TLGB Bolivia, a network that fights for LGBTQ+ rights across the country, knows this struggle firsthand. Their website, which provided important legal information and advocacy materials, was hit by misinformation campaigns and cyberattacks. At one point, attackers took over the site and changed its content to spread hate, making it unsafe for users and threatening years of hard work.
This grassroots organization has operated across all nine departmental capitals and several rural municipalities since 2000, ensuring that individuals of diverse sexual orientations and gender identities can fully exercise their rights. In addition to legislative advocacy, Colectivo TLGB Bolivia places a strong emphasis on health and well-being and has been instrumental in addressing mental health support and discrimination in healthcare settings. They’ve built alliances with various public and private institutions to provide comprehensive support to everyone in need. We were eager to support this organization’s digital resilience.
What we did to help
Distributed Press launched the Resilience Grant to help organizations facing such challenges. We wanted to show how websites that rely solely on centralized hosting are vulnerable to decay and to offer alternatives using peer-to-peer technologies.
We invited groups that have lost access to their digital materials to apply. After reviewing applications, we selected two organizations, including Colectivo TLGB Bolivia. Our goal was to rebuild their sites as static sites and host them on regular web servers as well as decentralized networks such as IPFS and Hypercore. This would make the sites easier to access in the long run, as having multiple pathways to access content increases resilience by preventing single points of failure.
How the recovery worked
Colectivo TLGB Bolivia had backups of their content, database, and source code, along with installation instructions. However, the site was built with PHP 5.3, a version that has been unsupported for over a decade. This meant that no modern tools could run the site without significant, invasive modifications to the code. To work around this, we searched for a way to run PHP 5 and keep the site functional. We used Alpine 3.8 containers, which still package PHP 5.6, allowing us to get the site running long enough to properly archive it.
Having access to the source code and a backup during the recovery process allowed us to make necessary adaptations, such as ensuring all URLs were consistent, which facilitated the archiving process.
We then converted the website into a static version, ideal for content prone to attacks because static sites present a reduced attack surface. Without server-side scripts or database interactions, they are less susceptible to common threats like SQL injection or cross-site scripting (XSS). Finally, we distributed the site across multiple systems and nodes, including standard HTTPS, IPFS, and Hypercore.
The result
Colectivo TLGB Bolivia’s site is now available in several ways:
- Through a regular web address (HTTPS)
- Through Web3 gateways in any web browser:
  https://colectivotlgbbolivia-org-bo.hyper.hypha.coop/
  https://colectivotlgbbolivia-org-bo.ipns.ipfs.hypha.coop/
- On the Hyper network using Agregore Browser: hyper://colectivotlgbbolivia.org.bo/
- On the IPFS network using Agregore Browser: ipfs://colectivotlgbbolivia.org.bo/
For activists and community groups, keeping their digital spaces online is just as important as organizing in person. Websites are often targets of censorship and attacks, especially for groups that challenge discrimination and defend human rights. By using decentralized technology, we’re helping to make sure these voices aren’t erased.
This project was a singular but important step in making the internet more reliable for communities that need it the most. We are committed to helping more organizations take action to secure their digital work for the future.
The internet, an essential space for preserving and sharing knowledge, is becoming increasingly fragile. Studies estimate that over 30% of links on the web become inaccessible within a decade, with countless valuable resources disappearing due to server failures, outdated platforms, and lack of maintenance.
One such case was Desarquivo: an extensive digital archive documenting feminist, anarchist, and anti-racist activism in Brazil since 2011. Developed on an outdated Drupal platform and hosted on a shared server that had reached full capacity, Desarquivo had become unstable, putting its valuable collection of more than 1,500 documents at risk.
SOLUTION
With the Filecoin Foundation for the Decentralized Web-supported Resilience Grant, Distributed Press aims to raise awareness about the fragility of digital resources in centralized infrastructures, and present alternatives on decentralized, resilient web foundations such as peer-to-peer protocols.
We invited organizations with lost assets to apply to this opportunity and analyzed the submissions using technical criteria. The two selected organizations were offered the chance to have their lost content recovered and moved onto a static site, and then published using HTTPS, IPFS and Hypercore protocols. This publishing approach eliminates reliance on a single centralized server for them, ensuring long-term accessibility and stability.
RESULT
Imotirõ, the collective behind Desarquivo, applied and was selected for the Resilience Grant as they complied with our technical and organizational criteria. By leveraging decentralized technologies and the use of static-sites, we recovered and distributed the website Desarquivo, safeguarding it from digital obsolescence, and preserving its historical and cultural records for future generations. The new, more resilient infrastructure enables the archive to thrive without being constrained by outdated software or limited server capacity.

Source: https://desarquivo.org/node/1467/
FULL STORY
The challenge of digital resilience
The internet is an essential space for preserving and sharing knowledge, but it is also fragile. Link rot and single points of failure threaten the longevity of digital archives. A study by Pew Research Center found that this happens even to government webpages, with 20% of them containing at least one broken link.
To address this issue, Distributed Press launched the Resilience Grant, an initiative designed to empower organizations working with crucial digital archives by transitioning them to decentralized, resilient web infrastructures. Within the Resilience Grant, we intended to raise awareness about the fragility of digital resources in centralized infrastructures, and present alternatives to be found on decentralized, resilient web foundations.
The selected organization: Imotirõ and Desarquivo project
Imotirõ is a cultural association of researchers, educators, activists, and artists working on interdisciplinary knowledge production. They applied to the Resilience Grant with the hope of recovering one of their most significant projects, Desarquivo. Desarquivo is an extensive digital archive documenting feminist, anarchist, and anti-racist activism in Brazil since 2011. It held more than 1,500 documents, including texts, artistic practices, and self-organized political movements, dating back to the late 1990s. However, due to technical and financial constraints, the project had become increasingly unstable and was at risk of disappearing.
When we received their submission to the Resilience Grant, we decided that the case complied with our technical and organizational selection criteria. Technically, they had a sufficient backup of their lost content. Organizationally, there was alignment: they are a purpose-driven organization that creates positive social impact for collectives facing systemic inequity and prejudice, while adhering to open knowledge ethics.
At risk of disappearing
Desarquivo was initially developed on Drupal, a powerful but resource-intensive content management system. Over time, the platform became outdated, making it difficult to maintain: the version of Drupal used in Desarquivo stopped being maintained by the development community in 2016. The instability of the Drupal-based site and the diminishing support for its version in free software communities posed a significant risk to the archive’s accessibility. Additionally, the archive was hosted on Njira, a shared server that had reached full capacity, preventing further updates and maintenance.
The recovery
When tasked with recovering Desarquivo’s website, we had full access to its code, archives, and database, which was crucial for restoring the site to its original state. Additionally, the team provided Docker configurations, allowing us to replicate the exact environment locally for recovery.
The initial restoration yielded an 8GB site with over 2,000 files, revealing significant redundancy in its dataset. We identified duplicate records and inefficient pagination inflating the archive’s size. After consulting with Desarquivo, we proceeded to optimize pagination, reducing the total footprint from 8GB to just 2GB without compromising content integrity.
Further optimizations included:
-
Deprecation cleanup: removing an obsolete login functionality.
-
Search integration: replacing the internal search with DuckDuckGo, eliminating the need to maintain a dedicated search index.
Next, we leveraged our own tools to detect link rot across the site’s hypermedia. We found that nearly 200 external links were misconfigured due to faulty validation in Drupal, preventing them from resolving correctly. After fixing these, all outbound links now direct users to their intended destinations.
Finally, we converted the Drupal-based content into a static site, significantly reducing server load and enhancing stability, security, and longevity. The recovered website was published both on the regular web (HTTPS) and on the distributed web: the IPFS and Hypercore protocols rely on peer-to-peer networks. By decentralizing storage, Desarquivo no longer relies on a single server, eliminating a critical point of failure and securing the archive’s availability for years to come. Curious about the role of distributed protocols? Read more in the Appendix.
And it’s back! Readers now have multiple options to access Desarquivo:
- Visit the URL https://desarquivo.org/
- Access Web3 systems using a gateway in any web browser:
  https://desarquivo-org.hyper.hypha.coop/
  https://desarquivo-org.ipns.ipfs.hypha.coop/
- On the Hyper network using Agregore Browser: hyper://desarquivo.org
- On the IPFS network using Agregore Browser: ipfs://desarquivo.org
A step toward a more resilient web
In these times of unprecedented attacks on human rights defenders, diversity and equity advocates, we need to commit to the longevity and safety of critical historical and cultural records. As social threats add to the already too common technical constraints, digital resilience becomes increasingly urgent. We feel this initiative can set a precedent for other collectives, researchers, and archivists looking to future-proof their work.
Learn more about Desarquivo and Imotirõ’s mission, and how Distributed Press continues to foster a more resilient web.
Appendix: Why go distributed?
We believe that the distributed web (dWeb) and decentralized technologies like the InterPlanetary File System (IPFS) and Hypercore offer robust solutions to link rot and the degradation of digital assets by ensuring content remains both accessible and verifiable.
IPFS and content addressing
IPFS addresses link rot by utilizing content-addressable storage, where each file is identified by a unique cryptographic hash, known as a Content Identifier (CID). This system ensures that content can be retrieved from any node storing the data, independent of its original location, thereby eliminating single points of failure. (Source: https://starlinglab.org)
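The idea can be seen in a few lines using the multiformats library (a sketch; this is the low-level building block IPFS uses rather than a Helia API call):

```js
import { CID } from 'multiformats/cid'
import * as raw from 'multiformats/codecs/raw'
import { sha256 } from 'multiformats/hashes/sha2'

// The CID is derived purely from the bytes: same content, same address,
// no matter which node serves it
const bytes = new TextEncoder().encode('hello dweb')
const hash = await sha256.digest(bytes)
const cid = CID.create(1, raw.code, hash)

console.log(cid.toString()) // a v1 CID string, e.g. "bafkrei..."
```

Because the address is a hash of the content, any node can verify that what it received is exactly what was asked for.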
Hypercore and versioned data
Hypercore, a foundational component of the Dat Protocol, offers a distributed, append-only log structure that maintains a complete history of data changes. This versioning capability allows users to access specific data versions, mitigating issues of content drift and ensuring the integrity of digital archives. (Source: https://www.datprotocol.com/)
We started with their website “Keep Khan”, an initiative to support FTC Chair Lina Khan’s efforts to protect the American public from monopolistic practices and corporate exploitation.
After testing the distribution with this campaign site, Fight for the Future wrote their own script to automate the publication of all their sites with Distributed Press. Up to February 2025, more than 50 initiatives are grounded in distributed foundations, including highly relevant and current campaigns like No FCC speech police – to protect the Federal Communications Commission from becoming a political weapon.
Multiple options to access a campaign:
Since copies of each website were put on many different Peer-to-Peer data sharing systems, there are several options to access them.
For example, to access Keep Khan readers can:
- Visit the URL https://www.keepkhan.com/
- Access Web3 systems using a gateway in any web browser:
  https://www-keepkhan-com.hyper.hypha.coop/
  https://www-keepkhan-com.ipns.ipfs.hypha.coop/
- On the Hyper network using Agregore Browser: hyper://keepkhan.com
- On the IPFS network using Agregore Browser: ipfs://keepkhan.com
Check all the amazing campaigns you can participate in through regular URLs, or explore accessing them via distributed protocols.
A legacy of advocacy: Fight for the Future
Fight for the Future has been at the forefront of the digital rights movement, through its creative and impactful campaigns to tackle complex issues. As an organization focused on technology and the rules governing it, they’ve led pivotal initiatives, including:
Net neutrality battles: Organizing global action in 2017, they mobilized millions to defend net neutrality, rallying tech giants like Reddit, Netflix, and Mozilla to join the fight.
Privacy protections: The Don’t Break Our Phones campaign challenged the FBI’s demand for Apple to compromise user security—a fight they won, setting a critical precedent for digital privacy.
Social justice tech: Championed causes like banning facial recognition and opposing Amazon’s surveillance technologies, ensuring technology serves freedom and equality rather than oppression.
Fight for the Future’s initiatives champion the use of technology as a tool for liberation and public interest, rather than oppression or economic exploitation of the audiences. By taking bold stances against powerful interests and exposing practices that undermine basic rights and democratic values, they face significant risks of attacks or censorship.

The Distributed Press Mission
Distributed Press was born to build a more open, equitable, and censorship-resistant internet for authors and publishers everywhere, no matter their tech literacy or skills. We help authors and organizations ensure their work remains under their control, accessible to their audiences, and resilient to attacks and technical challenges. Our tools fight link rot, bypass censorship and foster a more decentralized digital ecosystem, by leveraging distributed web protocols in order to avoid single points of failure.
In today’s complex digital sphere, we can’t be naive about the animosity towards initiatives that defend basic human rights and democratic values. Organizations working on environmental defense, LGBTQIA+ rights, resistance against corporate exploitation, and access to public information are at permanent risk of digital attacks. But sociopolitical persecution is not the only threat: technical hurdles, as simple as being unable to keep paying for hosting, can be just as detrimental to keeping content online. Digital content is fragile, which means a crucial part of important stories and knowledge is being lost over time.
We want all these important initiatives and their content to be protected and to have a safety net in the DWeb. Distributed and peer-to-peer protocols are the alternative to our traditional web rotting away over time.
You can read about another case we participated in, this time collaborating with Starling Lab to recover and preserve with integrity two photojournalism projects that had been taken down.
If you also want to create distributed versions of your sites, you can do it for free here.
If you are an NGO, non-profit, or an individual activist, this grant is designed to recover a website you have lost and fortify it with the power of the Distributed Web.
The deadline for submissions has been extended by one more week, so there’s still time to apply. If you think you or someone you know might benefit, don’t hesitate to apply through the application form.
Share this opportunity with others who might need it, and submit your application by January 20th!

Lost your website? We can (maybe) help!
We’re offering a Resilience Grant for NGOs, non-profits, and individuals who protect human rights and empower people, especially those facing systemic inequity and prejudice. We’ll recover your site and make it more resilient with the DWeb—100% free.
Apply now through this form! We’ll be choosing two organizations to work with in early January 2025.

FAQs
How will you recover my site?
We can restore your site if it meets certain technical conditions. Share your details in the registration form so we can evaluate whether we can help you.
How do you make my website more resilient?
At Distributed Press, we specialize in creating decentralized versions of websites using protocols like IPFS and Hypercore.
Having collaborative copies of your site hosted outside HTTP helps avoid fragility and link rot, reduces the risk of losing information, makes your data more resistant to censorship and deletion, and ensures it remains accessible even if/when hosting servers go down. As part of this grant, we’ll not only recover your site but also host it on the Distributed Web.
What are the conditions?
We will recover 1 site for 2 organizations or individuals during February 2025. Selection criteria include technical feasibility of the restoration and your consent to share the story as a case study afterward. Due to the tight timeline of this project, you’ll also need to agree to a response time of around 48 hours — so we can move forward together!
What is a site recovered and preserved on the dWeb?
We consider a website to be down when it is no longer accessible through its address; when it has not been properly maintained and has been infected; when it has been censored or blocked by the provider; when it was taken down by the provider for lack of payment; or when the provider no longer exists. There are many reasons why we can lose a website.
When we recover and preserve a site, we:
- Use a backup copy or a free public archiving service such as the Wayback Machine to generate a new version of the site, called a static site.
- Test the new static site so that it behaves like the lost one.
- Host copies of the new site on servers at Sutty and Distributed Press.
- Make the site accessible via its historical address, for example https://lostwebsite.org, and distributed addresses such as ipns://lostwebsite.org and hyper://lostwebsite.org.
What we will not be able to do is:
- We can’t recover the site or some of its pages if there are no full backup copies, or if it was developed with Flash or other closed technologies. We can only recover sites developed with open technologies such as HTML and CSS (fortunately, these are the majority of sites!).
- We can’t provide access to the original content manager. The recovered site will be static, i.e. functional for visitors but not updatable. We can quote separately for conversion to the Sutty content manager so that it can be updated.
- We can’t make changes or optimizations to the design or content of the site. We can quote this separately.
- We can’t recover the address (the “domain name”) if access was lost and cannot be restored. We can offer technical advice, but recovery is at the organization’s expense. If this is not possible, the recovered website can use a name in the style of recoveredsite.sutty.nl.
- We can’t bear the cost of recovery and annual renewal of the domain name. We can make technical recommendations.
Conditions that make it difficult or impossible to recover a site
No backup copies
If the site does not have backup copies or was not archived by Internet archiving projects such as the Wayback Machine, it is impossible to recover it.
Your content was private
If the site content (or part of it) was only accessible with a user account and password and there are no backups, it cannot be recovered, because it could not be publicly archived.
No access to the domain name
If the domain name was registered by a person outside the organization, the only way to recover it is by transferring it to a current account of the organization.
If we do not have access to the domain name, the site may be recovered, but it will have to be hosted under another domain name.
Keeping domain names is important because search engines and site visitors remember that address, and there is no technical way to announce the change. Our general recommendation is that once a domain name is registered, always keep it in the organization’s name and renew it even if it is no longer in use.
The domain name was taken by a domain landowner (domain “squatter”)
If we enter the site and find an advertisement saying that it can be bought again, it is very possible that the domain is registered by a domain landowner.
In that case, they will want to charge much more than the normal registration fee, based on the perceived value of the domain name, although it is always possible to negotiate a lower price.