Commit e991638

Fix formatting

1 parent 6f2c5d3

2 files changed: +52 −54 lines

docs/side_quests/metadata.md

Lines changed: 37 additions & 37 deletions
@@ -870,53 +870,53 @@ Applying this pattern in your own work will enable you to build robust, maintain

### Key patterns

1. **Reading and Structuring Metadata:** Reading CSV files and creating organized metadata maps that stay associated with your data files.

    ```groovy
    channel.fromPath('samplesheet.csv')
        .splitCsv(header: true)
        .map { row ->
            [ [id:row.id, character:row.character], row.recording ]
        }
    ```

2. **Expanding Metadata During the Workflow:** Adding new information to your metadata as your pipeline progresses, by adding process outputs and deriving values through conditional logic.

    - Adding new keys based on process output

        ```groovy
        .map { meta, file, lang ->
            [ meta + [lang:lang], file ]
        }
        ```

    - Adding new keys using a conditional clause

        ```groovy
        .map { meta, file ->
            def lang_group
            if ( meta.lang in ['de', 'en'] ) {
                lang_group = "germanic"
            } else if ( meta.lang in ['fr', 'es', 'it'] ) {
                lang_group = "romance"
            } else {
                lang_group = "unknown"
            }
            [ meta + [lang_group:lang_group], file ]
        }
        ```

3. **Customizing Process Behavior:** Using metadata to adapt how processes handle different files.

    - Using meta values in process directives

        ```groovy
        publishDir "results/${meta.lang_group}", mode: 'copy'
        ```

    - Adapting tool parameters for individual files

        ```groovy
        cat $input_file | cowpy -c ${meta.character} > cowpy-${input_file}
        ```
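Taken together, the three patterns above compose into a single pipeline flow. A rough end-to-end sketch, assuming hypothetical processes `IDENTIFY_LANGUAGE` (emitting `[meta, file, lang]`) and `COWPY`, neither of which is defined here:

```groovy
// Illustrative sketch only: IDENTIFY_LANGUAGE and COWPY are placeholder processes
workflow {
    // Pattern 1: read the samplesheet into [meta, file] tuples
    ch_input = channel.fromPath('samplesheet.csv')
        .splitCsv(header: true)
        .map { row -> [ [id:row.id, character:row.character], row.recording ] }

    // Pattern 2: fold a process output back into the meta map
    ch_tagged = IDENTIFY_LANGUAGE(ch_input)
        .map { meta, file, lang -> [ meta + [lang:lang], file ] }

    // Pattern 3: downstream processes can read meta.character, meta.lang, etc.
    COWPY(ch_tagged)
}
```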

### Additional resources

docs/side_quests/splitting_and_grouping.md

Lines changed: 15 additions & 17 deletions
@@ -1110,7 +1110,7 @@ Mastering these channel operations will enable you to build flexible, scalable p

### Key patterns

1. **Creating structured input data:** Starting from a CSV file with meta maps (building on patterns from [Metadata in workflows](./metadata.md))

    ```groovy
    ch_samples = channel.fromPath("./data/samplesheet.csv")

@@ -1120,14 +1120,13 @@ Mastering these channel operations will enable you to build flexible, scalable p

        }
    ```

2. **Splitting data into separate channels:** We used `filter` to divide data into independent streams based on the `type` field

    ```groovy
    channel.filter { it.type == 'tumor' }
    ```

3. **Joining matched samples:** We used `join` to recombine related samples based on `id` and `repeat` fields

    - Join two channels by key (first element of tuple)
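The `join` call itself falls outside this diff's context lines. As a minimal sketch of the pattern, with illustrative channel contents that are not from the course data:

```groovy
// Illustrative: both channels carry [key, file] tuples sharing the same grouping key
ch_tumor  = channel.of( [ [id:'sampleA', repeat:'1'], 'tumor.bam'  ] )
ch_normal = channel.of( [ [id:'sampleA', repeat:'1'], 'normal.bam' ] )

// join matches on the first tuple element (here the [id, repeat] map),
// emitting [key, tumor_file, normal_file] for each matched pair
ch_tumor.join(ch_normal).view()
```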

@@ -1153,33 +1152,32 @@ Mastering these channel operations will enable you to build flexible, scalable p

        )
    ```

4. **Distributing across intervals:** We used `combine` to create Cartesian products of samples with genomic intervals for parallel processing

    ```groovy
    samples_ch.combine(intervals_ch)
    ```

5. **Aggregating by grouping keys:** We used `groupTuple` to group by the first element in each tuple, thereby collecting samples sharing `id` and `interval` fields and merging technical replicates

    ```groovy
    channel.groupTuple()
    ```

6. **Optimizing the data structure:** We used `subMap` to extract specific fields and created a named closure for making transformations reusable

    - Extract specific fields from a map

        ```groovy
        meta.subMap(['id', 'repeat'])
        ```

    - Named closure for reusable transformations

        ```groovy
        getSampleIdAndReplicate = { meta, file -> [meta.subMap(['id', 'repeat']), file] }
        channel.map(getSampleIdAndReplicate)
        ```
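Patterns 4 through 6 form a scatter/gather shape that can be sketched in one chain. Channel names and field values below are illustrative, not taken from the course material:

```groovy
// Illustrative scatter/gather: fan out over intervals, then regroup by sample
samples_ch   = channel.of( [ [id:'sampleA'], 'sampleA.bam' ] )
intervals_ch = channel.of( 'chr1', 'chr2' )

samples_ch
    .combine(intervals_ch)                               // pattern 4: Cartesian product
    .map { meta, bam, interval ->
        [ meta + [interval:interval], bam ]              // track the interval in the meta map
    }
    // ... per-interval processing would run here ...
    .map { meta, bam -> [ meta.subMap(['id']), bam ] }   // pattern 6: reduce to the grouping key
    .groupTuple()                                        // pattern 5: gather files per sample
```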
## Additional resources

0 commit comments
