<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://docs.ada.edu.au/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=MMitrev</id>
	<title>ADA Public Wiki - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.ada.edu.au/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=MMitrev"/>
	<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php/Special:Contributions/MMitrev"/>
	<updated>2026-05-14T11:05:42Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>https://docs.ada.edu.au/index.php?title=Main_Page&amp;diff=1364</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php?title=Main_Page&amp;diff=1364"/>
		<updated>2025-10-22T03:53:14Z</updated>

		<summary type="html">&lt;p&gt;MMitrev: /* ADA Archival Workflow Diagram */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://ada.edu.au/ Australian Data Archive (ADA)] provides a national service for the collection and preservation of digital research data. ADA disseminates this data for secondary analysis by academic researchers and other users.&lt;br /&gt;
&lt;br /&gt;
= Deposit Process Overview =&lt;br /&gt;
If you are interested in depositing data with the Australian Data Archive, please email ada@ada.edu.au outlining the topic of your research and the nature of the data, e.g. sensitivity, data type, and the number and size of files. The ADA will assess each deposit request against a set of criteria; see [[Deposit Appraisal &amp;amp; Collection Policy]]. Once the data has been accepted, the ADA will create a deposit shell on a secure system for the depositor to upload the data and documentation files and to enter metadata (see [[Quick Deposit Guide]] for instructions on what to upload). As part of the curation process, the ADA will review the deposit and suggest changes if necessary (see [[Workflows]] for details). When curation is complete, the depositor has the opportunity to preview the final version of the deposit on a designated review page. Once the depositor approves the review, the deposit is published on the [https://dataverse.ada.edu.au/ ADA Dataverse] and receives a permanent DOI.&lt;br /&gt;
&lt;br /&gt;
[[File:Deposit_graph_4.png|1200px|center]]&lt;br /&gt;
&lt;br /&gt;
==Archival Steps explained once data has been approved for deposit==&lt;br /&gt;
&lt;br /&gt;
===Deposit Shell=== &lt;br /&gt;
* ADA creates a deposit dataset shell on ADA&#039;s DEPOSIT Dataverse where the depositor/data custodian uploads data and enters study- and file-level metadata into the metadata fields (e.g. Title, Author...).&lt;br /&gt;
* As part of this process the archivist and depositor(s) discuss the ADA Data License agreement.&lt;br /&gt;
* Once the depositor confirms they have completed the file uploads and metadata, they inform the archivist that the deposit is ready for ADA&#039;s archival processing. The deposit is then ingested internally into the archival and storage management system.&lt;br /&gt;
&lt;br /&gt;
===Archival Data Curation Process===&lt;br /&gt;
* Archivists check the data for confidentiality and completeness.  &lt;br /&gt;
* An archival Processing Report is sent to the depositor/data custodian, who responds to any recommended changes. The archivist then makes any amendments and uploads the dissemination (publication) version of the data to ADA&#039;s test Dataverse.&lt;br /&gt;
&lt;br /&gt;
===Review of data &amp;amp; metadata===  &lt;br /&gt;
* The archivist provides the depositor/data custodian with a link to the test Dataverse to verify that the dataset metadata and data are correct.&lt;br /&gt;
&lt;br /&gt;
===Publication===&lt;br /&gt;
* Once approval of the test version has been received from the depositor/data custodian, the data and metadata are migrated to the [https://dataverse.ada.edu.au/ production Dataverse], and the dataset is published for user access based on the ADA Data License agreement.&lt;br /&gt;
&lt;br /&gt;
===ADA Data License Agreement===&lt;br /&gt;
* The archivist will provide the ADA Data License agreement and confirm suitable options with the data custodian throughout the deposit and curation process.&lt;br /&gt;
&lt;br /&gt;
== To get started, go to [https://docs.ada.edu.au/index.php/Quick_Deposit_Guide Quick Deposit Guide]==&lt;br /&gt;
&lt;br /&gt;
For more detailed information on all the steps in the process, see [https://docs.ada.edu.au/index.php/Workflows Workflows]&lt;br /&gt;
&lt;br /&gt;
For the ADA website, please go to https://ada.edu.au/&lt;br /&gt;
&lt;br /&gt;
= ADA Archival Workflow Diagram =&lt;br /&gt;
ADA bases its archival workflow on the Open Archival Information System (OAIS) Reference Model (2012).  &lt;br /&gt;
&lt;br /&gt;
Diagram: [https://docs.ada.edu.au/images/e/eb/CTS_ADA-NCI_RDS-Storage_V8_2025_10_21_wiki.png ADA Archival Workflow based on OAIS Reference Model].&lt;br /&gt;
&lt;br /&gt;
= Australian Data Archive Overview =&lt;br /&gt;
&lt;br /&gt;
*[[Background Information and Context]]&lt;br /&gt;
&lt;br /&gt;
Organisational Infrastructure&lt;br /&gt;
&lt;br /&gt;
*[[Mission &amp;amp; Scope]]&lt;br /&gt;
*[[Rights Management]]&lt;br /&gt;
*[[Continuity of Service]]&lt;br /&gt;
*[[Legal &amp;amp; Ethical]]&lt;br /&gt;
*[[Governance &amp;amp; Resources]]&lt;br /&gt;
*[[Expertise &amp;amp; Guidance]]&lt;br /&gt;
&lt;br /&gt;
Digital Object Management&lt;br /&gt;
&lt;br /&gt;
*[[Provenance and authenticity]]&lt;br /&gt;
*[[Deposit Appraisal &amp;amp; Collection Policy]]&lt;br /&gt;
*[[Preservation plan]]&lt;br /&gt;
*[[Quality Assurance]]&lt;br /&gt;
*[[Workflows]]&lt;br /&gt;
*[[Discovery and Identification]]&lt;br /&gt;
*[[Reuse]]&lt;br /&gt;
*[[Setting Access Conditions]]&lt;br /&gt;
&lt;br /&gt;
Information Technology &amp;amp; Security&lt;br /&gt;
&lt;br /&gt;
*[[Storage &amp;amp; Integrity]]&lt;br /&gt;
*[[Technical Infrastructure]]&lt;br /&gt;
*[[Security]]&lt;br /&gt;
&lt;br /&gt;
= Australian Data Archive Projects =&lt;br /&gt;
&lt;br /&gt;
See the [[ADA Projects]] page for a detailed outline of ADA&#039;s involvement in multiple projects.&lt;br /&gt;
&lt;br /&gt;
= Superseded pages =&lt;br /&gt;
* [[ADA Self-Deposit - To Documentation Guides]]&lt;br /&gt;
* [[1. ADA Collection Policy Criteria Assessment]]&lt;br /&gt;
* [[2. Deposit Preparation]]&lt;/div&gt;</summary>
		<author><name>MMitrev</name></author>
	</entry>
	<entry>
		<id>https://docs.ada.edu.au/index.php?title=File:CTS_ADA-NCI_RDS-Storage_V8_2025_10_21_wiki.png&amp;diff=1363</id>
		<title>File:CTS ADA-NCI RDS-Storage V8 2025 10 21 wiki.png</title>
		<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php?title=File:CTS_ADA-NCI_RDS-Storage_V8_2025_10_21_wiki.png&amp;diff=1363"/>
		<updated>2025-10-22T03:50:10Z</updated>

		<summary type="html">&lt;p&gt;MMitrev: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>MMitrev</name></author>
	</entry>
	<entry>
		<id>https://docs.ada.edu.au/index.php?title=Main_Page&amp;diff=1359</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php?title=Main_Page&amp;diff=1359"/>
		<updated>2025-10-21T04:56:02Z</updated>

		<summary type="html">&lt;p&gt;MMitrev: /* ADA Archival Workflow Diagram */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://ada.edu.au/ Australian Data Archive (ADA)] provides a national service for the collection and preservation of digital research data. ADA disseminates this data for secondary analysis by academic researchers and other users.&lt;br /&gt;
&lt;br /&gt;
= Deposit Process Overview =&lt;br /&gt;
If you are interested in depositing data with the Australian Data Archive, please email ada@ada.edu.au outlining the topic of your research and the nature of the data, e.g. sensitivity, data type, and the number and size of files. The ADA will assess each deposit request against a set of criteria; see [[Deposit Appraisal &amp;amp; Collection Policy]]. Once the data has been accepted, the ADA will create a deposit shell on a secure system for the depositor to upload the data and documentation files and to enter metadata (see [[Quick Deposit Guide]] for instructions on what to upload). As part of the curation process, the ADA will review the deposit and suggest changes if necessary (see [[Workflows]] for details). When curation is complete, the depositor has the opportunity to preview the final version of the deposit on a designated review page. Once the depositor approves the review, the deposit is published on the [https://dataverse.ada.edu.au/ ADA Dataverse] and receives a permanent DOI.&lt;br /&gt;
&lt;br /&gt;
[[File:Deposit_graph_4.png|1200px|center]]&lt;br /&gt;
&lt;br /&gt;
==Archival Steps explained once data has been approved for deposit==&lt;br /&gt;
&lt;br /&gt;
===Deposit Shell=== &lt;br /&gt;
* ADA creates a deposit dataset shell on ADA&#039;s DEPOSIT Dataverse where the depositor/data custodian uploads data and enters study- and file-level metadata into the metadata fields (e.g. Title, Author...).&lt;br /&gt;
* As part of this process the archivist and depositor(s) discuss the ADA Data License agreement.&lt;br /&gt;
* Once the depositor confirms they have completed the file uploads and metadata, they inform the archivist that the deposit is ready for ADA&#039;s archival processing. The deposit is then ingested internally into the archival and storage management system.&lt;br /&gt;
&lt;br /&gt;
===Archival Data Curation Process===&lt;br /&gt;
* Archivists check the data for confidentiality and completeness.  &lt;br /&gt;
* An archival Processing Report is sent to the depositor/data custodian, who responds to any recommended changes. The archivist then makes any amendments and uploads the dissemination (publication) version of the data to ADA&#039;s test Dataverse.&lt;br /&gt;
&lt;br /&gt;
===Review of data &amp;amp; metadata===  &lt;br /&gt;
* The archivist provides the depositor/data custodian with a link to the test Dataverse to verify that the dataset metadata and data are correct.&lt;br /&gt;
&lt;br /&gt;
===Publication===&lt;br /&gt;
* Once approval of the test version has been received from the depositor/data custodian, the data and metadata are migrated to the [https://dataverse.ada.edu.au/ production Dataverse], and the dataset is published for user access based on the ADA Data License agreement.&lt;br /&gt;
&lt;br /&gt;
===ADA Data License Agreement===&lt;br /&gt;
* The archivist will provide the ADA Data License agreement and confirm suitable options with the data custodian throughout the deposit and curation process.&lt;br /&gt;
&lt;br /&gt;
== To get started, go to [https://docs.ada.edu.au/index.php/Quick_Deposit_Guide Quick Deposit Guide]==&lt;br /&gt;
&lt;br /&gt;
For more detailed information on all the steps in the process, see [https://docs.ada.edu.au/index.php/Workflows Workflows]&lt;br /&gt;
&lt;br /&gt;
For the ADA website, please go to https://ada.edu.au/&lt;br /&gt;
&lt;br /&gt;
= ADA Archival Workflow Diagram =&lt;br /&gt;
ADA bases its archival workflow on the Open Archival Information System (OAIS) Reference Model (2012).  &lt;br /&gt;
&lt;br /&gt;
Diagram: [https://docs.ada.edu.au/images/8/84/CTS_ADA-NCI_RDS-Storage_V8_2025_10_21_wiki.jpeg ADA Archival Workflow based on OAIS Reference Model].&lt;br /&gt;
&lt;br /&gt;
= Australian Data Archive Overview =&lt;br /&gt;
&lt;br /&gt;
*[[Background Information and Context]]&lt;br /&gt;
&lt;br /&gt;
Organisational Infrastructure&lt;br /&gt;
&lt;br /&gt;
*[[Mission &amp;amp; Scope]]&lt;br /&gt;
*[[Rights Management]]&lt;br /&gt;
*[[Continuity of Service]]&lt;br /&gt;
*[[Legal &amp;amp; Ethical]]&lt;br /&gt;
*[[Governance &amp;amp; Resources]]&lt;br /&gt;
*[[Expertise &amp;amp; Guidance]]&lt;br /&gt;
&lt;br /&gt;
Digital Object Management&lt;br /&gt;
&lt;br /&gt;
*[[Provenance and authenticity]]&lt;br /&gt;
*[[Deposit Appraisal &amp;amp; Collection Policy]]&lt;br /&gt;
*[[Preservation plan]]&lt;br /&gt;
*[[Quality Assurance]]&lt;br /&gt;
*[[Workflows]]&lt;br /&gt;
*[[Discovery and Identification]]&lt;br /&gt;
*[[Reuse]]&lt;br /&gt;
*[[Setting Access Conditions]]&lt;br /&gt;
&lt;br /&gt;
Information Technology &amp;amp; Security&lt;br /&gt;
&lt;br /&gt;
*[[Storage &amp;amp; Integrity]]&lt;br /&gt;
*[[Technical Infrastructure]]&lt;br /&gt;
*[[Security]]&lt;br /&gt;
&lt;br /&gt;
= Australian Data Archive Projects =&lt;br /&gt;
&lt;br /&gt;
See the [[ADA Projects]] page for a detailed outline of ADA&#039;s involvement in multiple projects.&lt;br /&gt;
&lt;br /&gt;
= Superseded pages =&lt;br /&gt;
* [[ADA Self-Deposit - To Documentation Guides]]&lt;br /&gt;
* [[1. ADA Collection Policy Criteria Assessment]]&lt;br /&gt;
* [[2. Deposit Preparation]]&lt;/div&gt;</summary>
		<author><name>MMitrev</name></author>
	</entry>
	<entry>
		<id>https://docs.ada.edu.au/index.php?title=File:CTS_ADA-NCI_RDS-Storage_V8_2025_10_21_wiki.jpeg&amp;diff=1358</id>
		<title>File:CTS ADA-NCI RDS-Storage V8 2025 10 21 wiki.jpeg</title>
		<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php?title=File:CTS_ADA-NCI_RDS-Storage_V8_2025_10_21_wiki.jpeg&amp;diff=1358"/>
		<updated>2025-10-21T04:53:46Z</updated>

		<summary type="html">&lt;p&gt;MMitrev: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>MMitrev</name></author>
	</entry>
	<entry>
		<id>https://docs.ada.edu.au/index.php?title=ADAPT&amp;diff=1356</id>
		<title>ADAPT</title>
		<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php?title=ADAPT&amp;diff=1356"/>
		<updated>2025-10-20T05:24:33Z</updated>

		<summary type="html">&lt;p&gt;MMitrev: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ADA Deposit and Preservation Tool (ADAPT) is a web-based tool developed by the Australian Data Archive to ensure that data and metadata in the Archive are programmatically moved between Dataverse instances and ADA’s archival storage. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
    See ADAPT&#039;s place within ADA&#039;s technical architecture in the diagram [https://docs.ada.edu.au/index.php/Main_Page#ADA_Archival_Workflow_Diagram ADA Archival Workflow based on OAIS Reference Model].&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Prior to version 3.0.0, ADAPT&#039;s scope was strictly the programmatic movement of data and metadata. The major downside of these earlier versions was that the application relied on archivists to adhere to the archival workflow, as the business rules were not enforced by the application. In addition, the state of each dataset was tracked externally, with references to Dataverse instances, assigned archivists, depositor contacts and related internal ticket numbers all maintained manually. To replace these manual processes and introduce enforceable business rules as steps that correlate to the OAIS reference model, ADA completed a major rewrite of the application on 4 March 2025.&lt;br /&gt;
&lt;br /&gt;
ADAPT version 3 is designed to be a direct representation of the OAIS model: the team developed application-enforced steps that each dataset must follow as it is processed by the archive. The steps must be followed in the application&#039;s defined order and cannot be skipped or ignored, ensuring adherence to the model and reducing risk. So that the application knows which processing options are available to the archivist for a given dataset, this version implements datasets as objects with a dynamic reference to their current position within the OAIS reference model. By tying the processing options to the dataset&#039;s state, ADA can ensure that data and metadata are moved between instances and storage only programmatically and only when appropriate. An additional benefit is that the progression of datasets can be tracked in an automated and verifiable process. All of the aforementioned references to Dataverse instances, assigned archivists, depositor contacts and related internal ticket numbers are also tracked in the application. To keep this information secure, ADAPT implements OAuth2 login so that only approved ADA staff can view and manipulate datasets.&lt;br /&gt;
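The ordered, non-skippable steps described above can be sketched as a minimal state machine. This is an illustration only: the step names, the Dataset class, and the example DOI are assumptions for the sketch, not ADAPT's actual code.

```python
# Illustrative sketch of ordered, non-skippable processing steps, loosely
# following the OAIS stages named on this wiki. Step names, the Dataset
# class, and the DOI below are hypothetical, not ADAPT's real identifiers.

OAIS_STEPS = ["deposit", "ingest", "curation", "review", "publication"]

class Dataset:
    def __init__(self, doi):
        self.doi = doi
        self.state = 0  # index of the dataset's current step in OAIS_STEPS

    def available_actions(self):
        # Only the single next step is ever offered to the archivist.
        if self.state + 1 == len(OAIS_STEPS):
            return []
        return [OAIS_STEPS[self.state + 1]]

    def advance(self, step):
        # Steps cannot be skipped or reordered.
        if step not in self.available_actions():
            raise ValueError(
                f"step {step!r} is not allowed from {OAIS_STEPS[self.state]!r}"
            )
        self.state = self.state + 1
        return OAIS_STEPS[self.state]

ds = Dataset("10.26193/EXAMPLE")   # hypothetical DOI
ds.advance("ingest")               # allowed: the next step in order
try:
    ds.advance("publication")      # rejected: would skip curation and review
except ValueError as exc:
    print(exc)
```

Because the only actions exposed are those valid for the dataset's current state, the workflow order is enforced by construction rather than by archivist discipline, which is the design shift the paragraph above describes.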
&lt;br /&gt;
== Web Ontology Language (OWL) ==&lt;br /&gt;
&lt;br /&gt;
The Web Ontology Language (OWL) is a W3C-recommended standard designed as a foundational language for the Semantic Web, which aims to make internet data machine-readable by providing a formal system for knowledge representation. At its core, OWL allows for the explicit, formal specification of an ontology, defining the terminology and the complex relationships within a specific domain of knowledge. For ADAPT, OWL is leveraged not just for static modeling but also for dynamic auditing and version control: the ontology includes specialized classes and properties to define events, actors, and timestamps, effectively creating a formal change-log schema. Any modification to the underlying datasets (such as adding, deleting, or altering entities) is automatically captured as a new set of RDF triples conforming to this change-log schema and persisted to an rdflib graph. This ensures that a formal, machine-readable history of all dataset changes is maintained, using the semantic rigor of OWL to provide a structured, queryable audit trail that is also rendered human-readable for each dataset in ADAPT.&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
&lt;br /&gt;
{| class=wikitable&lt;br /&gt;
|+ style=&amp;quot;text-align: left;&amp;quot; | ADAPT&#039;s internal roles and their equivalent permissions within ADAPT&lt;br /&gt;
|-&lt;br /&gt;
! scope=col | Role !! scope=col | Permissions&lt;br /&gt;
|-&lt;br /&gt;
| scope=row | User&lt;br /&gt;
| Process a dataset depending on available archival rules.&lt;br /&gt;
|-&lt;br /&gt;
| scope=row | Admin&lt;br /&gt;
| Manage users, and all User powers.&lt;br /&gt;
|-&lt;br /&gt;
| scope=row | Super&lt;br /&gt;
| Manage instances, manage archival steps, and all Admin &amp;amp; User powers.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Software Architecture ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.postgresql.org/ PostgreSQL] Database&lt;br /&gt;
* [https://www.python.org/ Python] Backend&lt;br /&gt;
** Web framework: [https://fastapi.tiangolo.com/ FastAPI]&lt;br /&gt;
** Database toolkit &amp;amp; ORM: [https://sqlmodel.tiangolo.com/ SQLModel]&lt;br /&gt;
* [https://react.dev/ React] Frontend&lt;br /&gt;
** Bundler: [https://vite.dev/ Vite] &lt;br /&gt;
** Component library: [https://ui.shadcn.com/ shadcn]&lt;br /&gt;
* [https://traefik.io/traefik Traefik] Proxy&lt;/div&gt;</summary>
		<author><name>MMitrev</name></author>
	</entry>
	<entry>
		<id>https://docs.ada.edu.au/index.php?title=ADAPT&amp;diff=1355</id>
		<title>ADAPT</title>
		<link rel="alternate" type="text/html" href="https://docs.ada.edu.au/index.php?title=ADAPT&amp;diff=1355"/>
		<updated>2025-10-20T05:23:58Z</updated>

		<summary type="html">&lt;p&gt;MMitrev: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Background ==&lt;br /&gt;
&lt;br /&gt;
ADA Deposit and Preservation Tool (ADAPT) is a web-based tool developed by the Australian Data Archive to ensure that data and metadata in the Archive are programmatically moved between Dataverse instances and ADA’s archival storage. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
    See ADAPT&#039;s place within ADA&#039;s technical architecture in the diagram [https://docs.ada.edu.au/index.php/Main_Page#ADA_Archival_Workflow_Diagram ADA Archival Workflow based on OAIS Reference Model].&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Prior to version 3.0.0, ADAPT&#039;s scope was strictly the programmatic movement of data and metadata. The major downside of these earlier versions was that the application relied on archivists to adhere to the archival workflow, as the business rules were not enforced by the application. In addition, the state of each dataset was tracked externally, with references to Dataverse instances, assigned archivists, depositor contacts and related internal ticket numbers all maintained manually. To replace these manual processes and introduce enforceable business rules as steps that correlate to the OAIS reference model, ADA completed a major rewrite of the application on 4 March 2025.&lt;br /&gt;
&lt;br /&gt;
ADAPT version 3 is designed to be a direct representation of the OAIS model: the team developed application-enforced steps that each dataset must follow as it is processed by the archive. The steps must be followed in the application&#039;s defined order and cannot be skipped or ignored, ensuring adherence to the model and reducing risk. So that the application knows which processing options are available to the archivist for a given dataset, this version implements datasets as objects with a dynamic reference to their current position within the OAIS reference model. By tying the processing options to the dataset&#039;s state, ADA can ensure that data and metadata are moved between instances and storage only programmatically and only when appropriate. An additional benefit is that the progression of datasets can be tracked in an automated and verifiable process. All of the aforementioned references to Dataverse instances, assigned archivists, depositor contacts and related internal ticket numbers are also tracked in the application. To keep this information secure, ADAPT implements OAuth2 login so that only approved ADA staff can view and manipulate datasets.&lt;br /&gt;
&lt;br /&gt;
== Web Ontology Language (OWL) ==&lt;br /&gt;
&lt;br /&gt;
The Web Ontology Language (OWL) is a W3C-recommended standard designed as a foundational language for the Semantic Web, which aims to make internet data machine-readable by providing a formal system for knowledge representation. At its core, OWL allows for the explicit, formal specification of an ontology, defining the terminology and the complex relationships within a specific domain of knowledge. For ADAPT, OWL is leveraged not just for static modeling but also for dynamic auditing and version control: the ontology includes specialized classes and properties to define events, actors, and timestamps, effectively creating a formal change-log schema. Any modification to the underlying datasets (such as adding, deleting, or altering entities) is automatically captured as a new set of RDF triples conforming to this change-log schema and persisted to an rdflib graph. This ensures that a formal, machine-readable history of all dataset changes is maintained, using the semantic rigor of OWL to provide a structured, queryable audit trail that is also rendered human-readable for each dataset in ADAPT.&lt;br /&gt;
&lt;br /&gt;
== Roles ==&lt;br /&gt;
&lt;br /&gt;
{| class=wikitable&lt;br /&gt;
|+ style=&amp;quot;text-align: left;&amp;quot; | ADAPT&#039;s internal roles and their equivalent permissions within ADAPT&lt;br /&gt;
|-&lt;br /&gt;
! scope=col | Role !! scope=col | Permissions&lt;br /&gt;
|-&lt;br /&gt;
| scope=row | User&lt;br /&gt;
| Process a dataset depending on available archival rules.&lt;br /&gt;
|-&lt;br /&gt;
| scope=row | Admin&lt;br /&gt;
| Manage users, and all User powers.&lt;br /&gt;
|-&lt;br /&gt;
| scope=row | Super&lt;br /&gt;
| Manage instances, manage archival steps, and all Admin &amp;amp; User powers.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Software Architecture ==&lt;br /&gt;
&lt;br /&gt;
* [https://www.postgresql.org/ PostgreSQL] Database&lt;br /&gt;
* [https://www.python.org/ Python] Backend&lt;br /&gt;
** Web framework: [https://fastapi.tiangolo.com/ FastAPI]&lt;br /&gt;
** Database toolkit &amp;amp; ORM: [https://sqlmodel.tiangolo.com/ SQLModel]&lt;br /&gt;
* [https://react.dev/ React] Frontend&lt;br /&gt;
** Bundler: [https://vite.dev/ Vite] &lt;br /&gt;
** Component library: [https://ui.shadcn.com/ shadcn]&lt;br /&gt;
* [https://traefik.io/traefik Traefik] Proxy&lt;/div&gt;</summary>
		<author><name>MMitrev</name></author>
	</entry>
</feed>