Workflows

From ADA Public Wiki

Latest revision as of 00:00, 4 December 2025

Deposit

ADA provides deposit guidelines on the public website [101] and the ADA wiki “Quick Deposit Guide” [5]. When a depositor contacts the ADA, the proposed deposit is appraised by the ADA for suitability [22]. Once the deposit has been provisionally accepted, an ADA archivist will set up a deposit shell on the ADA Deposit Dataverse [94] instance. The ADA archival workflow is managed programmatically by ADAPT [6] across three separate Dataverse installations, ensuring data integrity [37] as data is moved between instances. Depositors are instructed to upload all data and supporting documentation files to their dataset shell on the ADA Deposit Dataverse. An ADA archivist will prompt the depositor to complete the DDI metadata fields [93] on the deposit shell, and will correspond with the depositor if more information is needed to create complete documentation for their data, based on ADA requirements [33].
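
The deposit shell is an ordinary Dataverse dataset, so it can also be created through the standard Dataverse native API. The Python sketch below shows the idea only; the collection alias, API token, and metadata values are hypothetical placeholders, and in practice the shell is set up by the ADA archivist (via ADAPT) rather than by a script like this.

<syntaxhighlight lang="python">
# Sketch: create an empty deposit shell on the Deposit Dataverse via the
# standard Dataverse native API. Collection alias, token, and metadata
# values below are placeholders, not ADA's actual configuration.
import requests

DEPOSIT_BASE = "https://deposit.ada.edu.au"          # ADA Deposit Dataverse [94]
API_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # archivist API token (placeholder)
COLLECTION = "ada-deposits"                          # hypothetical collection alias

# Minimal citation metadata that Dataverse requires to create a dataset.
shell = {
    "datasetVersion": {
        "metadataBlocks": {
            "citation": {
                "displayName": "Citation Metadata",
                "fields": [
                    {"typeName": "title", "multiple": False, "typeClass": "primitive",
                     "value": "Example Survey, Wave 1 (deposit shell)"},
                    {"typeName": "author", "multiple": True, "typeClass": "compound",
                     "value": [{"authorName": {"typeName": "authorName", "multiple": False,
                                               "typeClass": "primitive", "value": "Smith, Jane"}}]},
                    {"typeName": "datasetContact", "multiple": True, "typeClass": "compound",
                     "value": [{"datasetContactEmail": {"typeName": "datasetContactEmail",
                                                        "multiple": False, "typeClass": "primitive",
                                                        "value": "depositor@example.edu.au"}}]},
                    {"typeName": "dsDescription", "multiple": True, "typeClass": "compound",
                     "value": [{"dsDescriptionValue": {"typeName": "dsDescriptionValue",
                                                       "multiple": False, "typeClass": "primitive",
                                                       "value": "Deposit shell awaiting data upload."}}]},
                    {"typeName": "subject", "multiple": True, "typeClass": "controlledVocabulary",
                     "value": ["Social Sciences"]},
                ],
            }
        }
    }
}

resp = requests.post(
    f"{DEPOSIT_BASE}/api/dataverses/{COLLECTION}/datasets",
    headers={"X-Dataverse-key": API_TOKEN},
    json=shell,
    timeout=30,
)
resp.raise_for_status()
print("Created shell:", resp.json()["data"]["persistentId"])
</syntaxhighlight>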

Data Processing

When the deposit shell is created using ADA’s ADAPT application [6], each deposit is assigned a unique six-digit ADA Identification (ADAID) number. ADAPT ensures that data provenance and authenticity [31] are maintained. It copies the deposited files to an archive directory identified by the same ADAID number; this copy forms the Submission Information Package (SIP). Within the SIP, the initial draft deposit remains unchanged so that a complete end-to-end audit trail can always be maintained. The archivists create a copy of the data for curation and processing. Archival and working directories are accessed via a secure Remote Desktop Service (RDS), with storage and infrastructure managed by the NCI [37].
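
The SIP layout described above can be illustrated with a short sketch. The paths, ADAID, and checksum manifest below are assumptions for illustration only; in practice ADAPT performs this step. The essential point is that the original deposit is written once under its ADAID and never modified, while curation happens on a separate working copy.

<syntaxhighlight lang="python">
# Sketch: lay down a Submission Information Package (SIP) for a new deposit.
# Paths, ADAID, and manifest format are illustrative assumptions; ADAPT
# performs this step in the actual ADA workflow.
import hashlib
import shutil
from pathlib import Path

ADAID = "123456"                               # hypothetical six-digit ADAID
DEPOSIT_DIR = Path("/incoming/123456")         # files retrieved from the Deposit Dataverse
ARCHIVE_ROOT = Path("/archive")                # placeholder for the NCI-hosted archive area

sip_dir = ARCHIVE_ROOT / ADAID / "SIP" / "original"
work_dir = ARCHIVE_ROOT / ADAID / "working"

def sha256(path: Path) -> str:
    """Fixity checksum supporting the end-to-end audit trail."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Copy the untouched deposit into the SIP and record checksums.
shutil.copytree(DEPOSIT_DIR, sip_dir)
manifest = sip_dir.parent / "manifest-sha256.txt"
with manifest.open("w") as out:
    for f in sorted(sip_dir.rglob("*")):
        if f.is_file():
            out.write(f"{sha256(f)}  {f.relative_to(sip_dir)}\n")

# 2. Create a separate working copy for curation; the SIP itself is never edited.
shutil.copytree(sip_dir, work_dir)
print(f"SIP created at {sip_dir}, working copy at {work_dir}")
</syntaxhighlight>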

Archivists curate and process the data and documentation as agreed with the depositor. The level of curation [102] may depend on the type of data (e.g. quantitative or qualitative), the perceived value of the data to the designated community, its sensitivity, or other factors as determined in consultation with the depositor. The archivist will check for disclosure risk (covered under Rights Management [26]) and liaise with the depositor about how best to mitigate any risks identified. Data are also checked for re-usability [36], including appropriate metadata and consistent mapping to supporting documentation such as data dictionaries or user guides. Proposed changes to the data are detailed in the ADA Processing Report, which is sent to the data depositor or data custodian for approval before the changes are made. All agreed changes are recorded in the curation syntax as part of the Archival Information Package (AIP) [31].
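
As a simple illustration of the kinds of automated checks that can support this curation step, the sketch below scans a working data file for columns that commonly carry disclosure risk and for variables missing from the data dictionary. The file path, column-name patterns, uniqueness threshold, and dictionary are hypothetical; the actual assessment is made by the archivist in consultation with the depositor.

<syntaxhighlight lang="python">
# Sketch: illustrative curation checks on a working data file.
# Patterns, threshold, path, and data dictionary are assumptions only.
import re
import pandas as pd

WORK_FILE = "/archive/123456/working/survey.csv"   # hypothetical working copy

# Column names that often indicate direct identifiers (disclosure risk).
IDENTIFIER_PATTERN = re.compile(r"name|email|phone|address|dob|postcode", re.IGNORECASE)

df = pd.read_csv(WORK_FILE)

# 1. Columns whose names suggest direct identifiers.
flagged = [c for c in df.columns if IDENTIFIER_PATTERN.search(c)]

# 2. Near-unique columns, which can make respondents re-identifiable.
near_unique = [c for c in df.columns if df[c].nunique(dropna=True) > 0.95 * len(df)]

# 3. Columns missing from the data dictionary (re-usability check).
data_dictionary = {"age_group", "sex", "state", "income_band"}  # assumed documented variables
undocumented = [c for c in df.columns if c not in data_dictionary]

for issue, cols in [("possible identifiers", flagged),
                    ("near-unique values", near_unique),
                    ("not in data dictionary", undocumented)]:
    if cols:
        print(f"{issue}: {', '.join(cols)}")
</syntaxhighlight>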

Review and Publication

Once all agreed changes to the data and metadata have been made, the archivist will set up a preview on the Test Dataverse instance that reflects the intended final ‘published’ version of both the metadata and files. Once the data custodian/depositor has approved the preview, it is duplicated on the Production Dataverse instance using the ADAPT tool. There the data is published, searchable, and available for access requests. DDI metadata is always publicly accessible, as are all project documentation files (unless depositors have specified otherwise). Access to the data itself is typically restricted: data can be downloaded subject to the data access criteria [8], which at a minimum require an ADA account with a verified institutional email and sufficient responses to any “Guestbook” questions (subject to the ADA License Agreement and Terms of Access; see Rights Management [26] for details). Access criteria are recorded on ADA’s internal wiki (not publicly available) for reference by access management staff.

Changes or updates to the data files of an already published deposit are handled via the deposit and processing workflows described above. Changes are automatically version controlled in Dataverse. A major change, that is, a change to the data, results in a new major version (e.g. Version 1.0 becomes Version 2.0), while a minor change, such as an addition to the metadata, results in a new minor version (e.g. Version 1.0 becomes Version 1.1).
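
Dataverse exposes this versioning behaviour through its publish endpoint, where a type parameter selects a major or minor release. The sketch below assumes the standard Dataverse native API; the DOI and API token are placeholders, and at ADA publication is driven through ADAPT rather than called directly like this.

<syntaxhighlight lang="python">
# Sketch: publish a dataset version on the Production Dataverse using the
# standard Dataverse publish endpoint. DOI and token are placeholders.
import requests

PRODUCTION_BASE = "https://dataverse.ada.edu.au"
API_TOKEN = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # placeholder
DOI = "doi:10.26193/EXAMPLE"                         # hypothetical persistent identifier

def publish(persistent_id: str, release_type: str) -> dict:
    """release_type is 'major' (e.g. 1.0 -> 2.0) or 'minor' (e.g. 1.0 -> 1.1)."""
    resp = requests.post(
        f"{PRODUCTION_BASE}/api/datasets/:persistentId/actions/:publish",
        params={"persistentId": persistent_id, "type": release_type},
        headers={"X-Dataverse-key": API_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# A change to the data files warrants a major release; a metadata-only
# change warrants a minor release.
print(publish(DOI, "major"))
</syntaxhighlight>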

Preservation

At publication, preservation versions of the DDI metadata are exported via ADAPT using the Dataverse export functionality. The metadata is stored in a preservation sub-directory within that deposit’s ADAID archive directory, along with a copy of the published SPSS data file(s) and SPSS syntax. The Preservation Plan [9] outlines how ADA manages long-term preservation of data and metadata for reuse.
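
The DDI export corresponds to the standard Dataverse metadata export endpoint. A minimal sketch follows, assuming a published dataset with a known DOI; the DOI and the ADAID-based preservation path are placeholders for values ADAPT derives in practice.

<syntaxhighlight lang="python">
# Sketch: export published DDI metadata into the preservation sub-directory
# using the standard Dataverse export API. DOI and path are placeholders.
from pathlib import Path
import requests

PRODUCTION_BASE = "https://dataverse.ada.edu.au"
DOI = "doi:10.26193/EXAMPLE"                              # hypothetical persistent identifier
PRESERVATION_DIR = Path("/archive/123456/preservation")   # assumed ADAID-based layout

# The export endpoint works on published datasets and needs no API key.
resp = requests.get(
    f"{PRODUCTION_BASE}/api/datasets/export",
    params={"exporter": "ddi", "persistentId": DOI},
    timeout=30,
)
resp.raise_for_status()

PRESERVATION_DIR.mkdir(parents=True, exist_ok=True)
ddi_file = PRESERVATION_DIR / "ddi.xml"
ddi_file.write_bytes(resp.content)
print(f"DDI metadata written to {ddi_file}")
</syntaxhighlight>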

Adjusting Workflows, Decision Handling, and Change Management

The ADA Archive Team meets weekly with the ADA Director to discuss workflows and make decisions as required. Meetings follow a regularly updated agenda, with outcomes and actions documented. Projects are managed in ADA's GitHub or through ANU's Microsoft SharePoint platforms.

References

[101] ADA website - depositing data - (https://ada.edu.au/depositing-data/)

[5] Deposit guidelines - (https://docs.ada.edu.au/index.php/Quick_Deposit_Guide)

[22] Deposit Appraisal & Collection Policy - (https://docs.ada.edu.au/index.php/Deposit_Appraisal_%26_Collection_Policy)

[94] ADA Deposit Dataverse - (https://deposit.ada.edu.au)

[6] ADAPT - (https://docs.ada.edu.au/index.php/ADAPT)

[37] Storage & Integrity - (https://docs.ada.edu.au/index.php/Storage_%26_Integrity)

[93] Metadata guidelines for ADA Dataverse - (https://docs.ada.edu.au/index.php/Metadata_guidelines_for_ADA_Dataverse)

[33] Quality Assurance - (https://docs.ada.edu.au/index.php/Quality_Assurance)

[31] Provenance and authenticity - (https://docs.ada.edu.au/index.php/Provenance_and_authenticity)

[102] Levels of curation - (https://docs.ada.edu.au/index.php/Quick_Deposit_Guide#Levels_of_Curation)

[26] Rights Management - (https://docs.ada.edu.au/index.php/Rights_Management)

[36] Reuse - (https://docs.ada.edu.au/index.php/Reuse)

[8] Setting Access Conditions - (https://docs.ada.edu.au/index.php/Setting_Access_Conditions)

[9] Preservation Plan - (https://docs.ada.edu.au/index.php/Preservation_plan)