@MFoss19
Thank you very much for this contribution to Drill, and nice work! I left some review comments. A general comment: please remove any extraneous commented-out code; Drill also has an automated style sheet which you can install for IntelliJ or Eclipse.
Also, to pass the CI and merge this, all the files will need to have the Apache license at the beginning of the file. The only exception would be the fwf test files. For those, you're going to have to add an exclusion to the RAT check in the root pom.xml file.
```java
new RowSetComparison(expected).verifyAndClearAll(results);
System.out.println("Test complete.");
```
Please add tests for:
```java
@Override
public boolean next() { // Use loader to read data from file to turn into Drill rows
```
There is a small issue with the `next()` function as written. You define `maxRecords` but don't do anything with it. The `maxRecords` is the `LIMIT` which gets pushed down from the query. The idea being that if a user does a `SELECT ... LIMIT 10`, your reader should stop reading as soon as that limit is reached. Which means that the `next()` function should return `false` when the limit has been reached. The good news is that your `writer` object actually has a method called `limitReached(<maxRecords>)` which will track this for you.
Another thing you might consider doing, which would clean up the code a bit, would be to make the `parseLine` method return `true` if there is more data to read, `false` if not. Also move the writer start and end to that method; then you could have a `next()` method that looks like this:
```java
@Override
public boolean next() {
  recordCount = 0;
  while (!writer.isFull()) {
    if (!parseLine(writer)) {
      return false;
    }
  }
  return true;
}
```
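If you also want to honor the pushed-down limit, a sketch combining the two suggestions might look like this (assuming `maxRecords` holds the pushed-down limit and `parseLine()` has been changed to return `false` at end of input):

```java
@Override
public boolean next() {
  while (!writer.isFull()) {
    if (!parseLine(writer)) {
      return false;                        // end of input
    }
    if (writer.limitReached(maxRecords)) {
      return false;                        // pushed-down LIMIT satisfied
    }
  }
  return true;                             // batch full, more rows may remain
}
```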
Fixed use of `maxRecords` in the `next()` function. Still looking at moving the writer start and end to `next()`.
```java
  return builder.buildSchema();
}

private void parseLine(String line, RowSetLoader writer) {
```
See comment above but I'd recommend making this method return a boolean value.
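For illustration, a sketch of that shape (the `reader` field is whatever buffered reader the plugin already holds; the per-field parsing body is elided):

```java
// Sketch: parseLine() pulls its own line, owns writer.start()/save(), and
// reports whether a row was written.
private boolean parseLine(RowSetLoader writer) throws IOException {
  String line = reader.readLine();
  if (line == null) {
    return false;            // no more data
  }
  writer.start();
  // ... existing per-field substring / convert / write logic ...
  writer.save();
  return true;
}
```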
```java
        writer.scalar(i).setTimestamp(timeStamp);
        break;
      default:
        throw new RuntimeException("Unknown data type specified in fixed width. Found data type " + dataType);
```
Please convert this to a `UserException`.
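For example, something along these lines (a sketch; `errorContext` and `logger` are assumed to be the reader's existing fields):

```java
throw UserException
    .dataReadError()
    .message("Unknown data type %s specified in fixed-width field", dataType)
    .addContext(errorContext)
    .build(logger);
```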
```java
return EasyFormatConfig.builder()
    .readable(true)
    .writable(false)
    .blockSplittable(false)
```
I think this actually might be block splittable.
@MFoss19, thanks much for the contribution. This should be very helpful for folks.
My comments are about some of the nuances of creating a storage plugin. I reference a couple of Drill internals. Let me know if you want more details about these.
```java
this.fieldName = fieldName;
this.dateTimeFormat = dateTimeFormat;
this.startIndex = startIndex;
this.fieldWidth = fieldWidth;
```
Since configs are created by hand, having good defaults is helpful. Perhaps:

- `name`: required; throw an exception if blank, or if the stripped name is not a valid SQL symbol.
- `type`: default to `VARCHAR`.
- `dateTimeFormat`: `null` is allowed, so no default.
- `index`: required, must be >= 0.
- `width`: either required, or can be optional. If provided, must be > 0. (See below.)

For this plugin, we also have to check the set of fields.
We could be clever. Scan all fields and sort into ascending order. If a field omits the width, just compute it from the index of this and the next field.
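A sketch of that idea (illustrative only; it uses the getters this config already exposes and leaves open where the inferred widths are stored, since `fieldWidth` is a final member):

```java
// Sketch: sort fields by start index and infer a missing (zero) width from the
// gap to the next field. The last field would still need an explicit width.
// (java.util.ArrayList and Comparator imports assumed.)
List<FixedwidthFieldConfig> sorted = new ArrayList<>(fields);
sorted.sort(Comparator.comparingInt(FixedwidthFieldConfig::getStartIndex));
int[] widths = new int[sorted.size()];
for (int i = 0; i < sorted.size(); i++) {
  FixedwidthFieldConfig field = sorted.get(i);
  if (field.getFieldWidth() > 0) {
    widths[i] = field.getFieldWidth();
  } else if (i + 1 < sorted.size()) {
    widths[i] = sorted.get(i + 1).getStartIndex() - field.getStartIndex();
  }
}
```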
```java
private final int startIndex;
private final int fieldWidth;

public FixedwidthFieldConfig(@JsonProperty("dataType") TypeProtos.MinorType dataType,
```
Does it work to use the `MinorType` here? Is that type set up for Jackson serialization? I don't know the answer, just noting we should double-check to ensure it works OK.
This was tested by configuring the plugin within the Drill UI (via manual test). I will also add an automated unit test for parsing the JSON. To answer your question: yes, it works to use `MinorType` here. Jackson can always read in Java enums; `MinorType` was generated as part of our protobuf, and for this type of data protobuf generates proper Java enums.
```java
return EasyFormatConfig.builder()
    .readable(true)
    .writable(false)
    .blockSplittable(false)
```
I'm pretty sure fixed-width files are splittable. If every record resides on a single line, then the file is splittable if we add code that, when the start offset != 0, scans to the next newline. And, on read, reads rows until the file position is greater than the block end. See the text file (CSV) plugin for details (though don't follow its implementation, as it is rather unique to that one use case).
This uses "EVF V1". Your plugin provides schema information, and is thus a perfect fit for "EVF V2" which can use your schema information to set up the row set loader schema automatically for you.
@paul-rogers Is there an example somewhere of how to use the rowset loader to set up the schema automatically? Is this as simple as checking to see whether the schema is provided and if so, use that?
@cgivre, it's been a while since I looked at that stuff. As I recall, the CSV reader was converted, but it is a bit obscure. The unit tests also show what can be done, IIRC.
Basically, V2 gets schema from multiple sources: the schema the reader itself provides, a provided schema, and the query's projection list (e.g. the `columns` array). The V2 `SchemaNegotiator` provides the needed methods. V2 will then come up with the final schema. V2 lets you read the entire row (columns a, b, c, say) even if the query wants only column a: V2 silently discards the unwanted columns.
```java
for (FixedwidthFieldConfig field : config.getFields()) {
  value = line.substring(field.getStartIndex() - 1, field.getStartIndex() + field.getFieldWidth() - 1);
  dataType = field.getDataType();
  dateTimeFormat = field.getDateTimeFormat();
```
This is OK, but slow because of the switch. There is a set of field converter classes which can handle the string-to-whatever conversions. With those, there is a direct call per field (in the inner loop) from reading the field, to converting it, to writing it into the value vectors. The field-specific type switch is done only once, at setup time.
These converters are used in the CSV reader when a schema is provided. I can dig up more examples if helpful.
To be clearer: what we want is to minimize the per-field work. Ideally, we'd set up an array of column converters so the loop looks like:
```java
for (int i = 0; i < config.getFields().size(); i++) {
  FixedwidthFieldConfig field = config.getFields().get(i);
  value = line.substring(field.getIndex() - 1, field.getIndex() + field.getWidth() - 1);
  writer.scalar(i).setString(value);
}
```
However, the above requires the user to specify the config. For every file. On every Drill cluster. Better, if the schema is not given, infer it from the first (few) row(s). Then, set up runtime field objects:
```java
for (FieldReader fieldReader : fieldReaders) {
  fieldReader.load(line);
}
```
The field reader:
@cgivre mentioned the idea of a column converter. There is a defined set for the common cases. The underlying mechanism sets them up for you. (V2 makes it simpler.) That way, a call to `setString()` directly invokes the thing that converts from string and writes the resulting value: no per-column switch needed.
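For illustration, a hypothetical per-column reader along those lines; the column's `ScalarWriter` is captured once at setup, so the per-row call is just a substring plus `setString()`:

```java
// Hypothetical FieldReader: one instance per column, built once at setup time.
// ScalarWriter is org.apache.drill.exec.vector.accessor.ScalarWriter.
private static class FieldReader {
  private final int start;            // 0-based start offset within the line
  private final int end;              // exclusive end offset
  private final ScalarWriter colWriter;

  FieldReader(int start, int width, ScalarWriter colWriter) {
    this.start = start;
    this.end = start + width;
    this.colWriter = colWriter;
  }

  void load(String line) {
    // setString() delegates to the column's converter, so there is no
    // per-row type switch here.
    colWriter.setString(line.substring(start, end));
  }
}
```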
```java
private boolean parseLine(String line, RowSetLoader writer) throws IOException {
  int i = 0;
  TypeProtos.MinorType dataType;
  String dateTimeFormat;
```
@cgivre @MFoss19 @estherbuchwalter here we are reading column data types from the format config, where we also specify their start and stop offsets. But this format plugin can also accept data types from a provided schema. So my question is: can we drop the data type information from the format config so that we don't introduce multiple ad-hoc ways of specifying this info? This is genuinely a question because I don't know this subject well, but should we not handle data type specs here in exactly the same way we do for CSV (where I don't think they can be provided in the format config)?
My original understanding was that the fixed width plugin would work in a similar manner to the log regex reader, where the user provides the schema in the config, either in the format config or at query time using the `table()` function.
What you want to do is to resolve the schema at open time, not when parsing. At open time, you can:
Since this is a fixed format, we don't want to rediscover the schema on every line: that costs too much. (Think of the case of reading 100M or 1B rows: optimizing the inner loop is critical.)
@MFoss19 @estherbuchwalter following some recent chat with @paul-rogers and my last comment here, how about a reduced format config such as the following? The goal is to get to something terse and consistent with what we do for other text formats.

```json
"fixedwidth": {
  "type": "fixedwidth",
  "extensions": [
    "fwf"
  ],
  "extractHeader": true,
  "trimStrings": true,
  "columnOffsets": [1, 11, 21, 31],
  "columnWidths": [10, 10, 10, 10]
}
```
Column names and types can already come from a provided schema or from aliasing after calls to `CAST()`. Incidentally, the settings above can be overridden per query using a provided schema too.
There's also a part of me that wonders whether we could have justified adding our fixed width functionality to the existing delimited text format reader.
> @MFoss19 @estherbuchwalter following some recent chat with @paul-rogers and my last comment here, how about a reduced format config such as the following? The goal is to get to something terse and consistent with what we do for other text formats.
>
> `"fixedwidth": { "type": "fixedwidth", "extensions": [ "fwf" ], "extractHeader": true, "trimStrings": true, "columnOffsets": [1, 11, 21, 31], "columnWidths": [10, 10, 10, 10] }`
>
> Column names and types can already come from a provided schema or from aliasing after calls to `CAST()`. Incidentally, the settings above can be overridden per query using a provided schema too. There's also a part of me that wonders whether we could have justified adding our fixed width functionality to the existing delimited text format reader.
@dzamo In this case, I'd respectfully disagree. In effect, the configuration is providing a schema to the user, similar to the way the logRegex reader works. Here the user will get the best data possible if we can include data types and field names in the schema, so that they can just do a `SELECT *` and not have to worry about casting etc.
Let's consider a real-world use case: some fixed-width log generated by a database. Since the fields may be mashed together, there isn't a delimiter that you can use to divide the fields. You could, however, use the logRegex reader to do this. That point aside for the moment, the way I imagined someone using this was that different configs could be set up and linked to workspaces, such that if a file was in the `mysql_logs` folder it would use the mysql log config, and if it was in the `postgres` folder it would use another.
My opinion here is that the goal should be to get the cleanest possible data to the user without the user having to rely on CASTs and other complicating factors.
> Let's consider a real-world use case: some fixed-width log generated by a database. Since the fields may be mashed together, there isn't a delimiter that you can use to divide the fields. You could, however, use the logRegex reader to do this. That point aside for the moment, the way I imagined someone using this was that different configs could be set up and linked to workspaces, such that if a file was in the `mysql_logs` folder it would use the mysql log config, and if it was in the `postgres` folder it would use another.
@cgivre This use case would still work after two `CREATE SCHEMA` statements to set the names and data types, wouldn't it? The schemas would be applied to every subsequent query.
> My opinion here is that the goal should be to get the cleanest possible data to the user without the user having to rely on CASTs and other complicating factors.
Let's drop the CASTs, those aren't fun. So we're left with different ways a user can specify column names and types, e.g. `CREATE SCHEMA` against a directory. Any one requires some effort, and any one gets you to `select *` returning nice results (disclaimer: is this claim I'm making actually true?), which is super valuable. So shouldn't we avoid the quirky option 3 and commit to options 1 and 2 consistently wherever we can?
@MFoss19 @estherbuchwalter I added some review comments. Please make sure the unit tests pass and also that there are no code style violations before pushing to GitHub.
```java
throw UserException
    .dataReadError(e)
    .message("Failed to open input file: {}", split.getPath().toString())
    .addContext(errorContext)
```
You can remove this second line. Also, please add `e.getMessage()` to the message line.
I like that second line (if you mean the `message()` call). Another solution is to change `message()` to `addContext()`. This way, we preserve the message from the actual error and add context to explain the source of the error. Then, as Charles suggested, we don't need the `addContext(e.getMessage())` bit.
```java
@Override
public boolean next() { // Use loader to read data from file to turn into Drill rows
  String line;
```
This line should be in the `open()` method.
```java
throw UserException
    .dataReadError(e)
    .message("Failed to read input file: {}", split.getPath().toString())
    .addContext(errorContext)
```
For the error message, you don't need to have multiple `addContext()` calls. The main thing is to pass the `errorContext`. I would add the `e.getMessage()` to the `message()` call.

Here and elsewhere.
See explanation above. Ideally:
```java
throw UserException
    .dataReadError(e)
    .addContext("Failed to read input file: {}", split.getPath().toString())
    .addContext(errorContext)
    .addContext("Line Number", lineNum)
    .build(logger);
```
Thanks for adding the line number: nice touch.
```java
String line;
RowSetLoader writer = loader.writer();

try {
```
Why not include this in the loop?
```java
      .addContext(e.getMessage())
      .addContext("Line Number", lineNum)
      .build(logger);
}
```
The `next()` method needs some work. Really this should be called `nextBatch()`, as the next method returns `true` when there is more data to read, `false` if not.
```java
@Override
public boolean next() {
  while (!rowWriter.isFull()) {
    if (!processNextLine()) {
      return false;
    }
  }
  return true;
}
```
This method will iterate through the batch of data, and when the `rowWriter` is full (i.e. the batch is full) it will stop reading, BUT the method will return `true` because there is more data to read. The limit is pushed down in the `processNextLine()` method.
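For completeness, a sketch of what `processNextLine()` might look like under that contract (not the actual implementation; `maxRecords` is assumed to hold the pushed-down limit, and the other field names follow the ones used elsewhere in this reader):

```java
// Sketch: returns false at end of input or once the pushed-down LIMIT is reached.
private boolean processNextLine() {
  try {
    String line = reader.readLine();
    if (line == null) {
      return false;                      // end of input
    }
    lineNum++;
    rowWriter.start();
    parseLine(line, rowWriter);
    rowWriter.save();
  } catch (IOException e) {
    throw UserException
        .dataReadError(e)
        .addContext("Failed to read input file: {}", split.getPath().toString())
        .addContext(errorContext)
        .addContext("Line Number", lineNum)
        .build(logger);
  }
  return !rowWriter.limitReached(maxRecords);
}
```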
```java
errorContext = negotiator.parentErrorContext();
lineNum = 0;
try {
  fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
```
Here's where you can check to see whether the user provided a schema or not. You could do something like this:
```java
if (negotiator.hasProvidedSchema()) {
  TupleMetadata providedSchema = negotiator.providedSchema();
  // Build column writer array
  negotiator.tableSchema(finalSchema, true);
} else {
  negotiator.tableSchema(buildSchema(), true);
}
```
This is done internally in V2. V2 does it that way because it became clear that there is no reason for every reader to have to know to do this same pattern.
```java
@Override
public void close() {
  if (fsStream != null){
```
This line should be out of the if statement.
The reason it can be out of the `if` statement is that the method itself handles null values. Otherwise, the code would be fine as it is if it called `close()` directly.
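If the helper in question is Drill's `AutoCloseables` (an assumption on my part), the method could reduce to something like:

```java
@Override
public void close() {
  // closeSilently() tolerates null, so no if-check is needed.
  AutoCloseables.closeSilently(fsStream);
  fsStream = null;
}
```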
```java
return EasyFormatConfig.builder()
    .readable(true)
    .writable(false)
    .blockSplittable(false) // Change to true
```
Change to true.
But only if the additional work described above is done.
```java
while (!writer.isFull() && line != null) {
  writer.start();
  parseLine(line, writer);
  writer.save();
```
You can lose a line here: read the next line, but the writer is full, so we discard it. As @cgivre suggests:
```java
while (!writer.isFull()) {
  String line = reader.readLine();
  if (line == null) {
    break;
  }
  writer.start();
  parseLine(line, writer);
  writer.save();
}
```
```java
split = negotiator.split();
errorContext = negotiator.parentErrorContext();
lineNum = 0;
try {
```
Fixed-width files seem perfect for HDFS splits.
So there is a trick here for old-school HDFS systems (if anyone still runs them.) A large file will be split across HDFS nodes, often at 256MB boundaries. A reader that supports "splits" has to handle the fact that node 1 will read the first 256 MB, node 2 the next 256 MB, and so on. Standard HDFS stuff.
This means the reader has to accept the offset and scan for the first record separator after that start point. If the fixed-width records are newline-terminated, that means finding the next newline.
Also, the reader has to stop after it finds the record terminator after its assigned split. (On HDFS, that means part of the last record will be read from a remote HDFS node.)
Further, it means that line numbers, as counted here, are relative: they are from the start of the current split (since node 2, reading the second split, doesn't know the number of records in the first split.)
The CSV reader handles this busy-work, but in a way that is a bit hard to follow as an example, sadly.
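A rough sketch of the two pieces described above (names are illustrative; this assumes an uncompressed, seekable `FSDataInputStream`, and the exact calls should be checked against the Drill and Hadoop APIs):

```java
// On open: if this split does not start at offset 0, the partial record at the
// start belongs to the previous split, so skip forward past the next newline.
private void skipToFirstRecord(FSDataInputStream in) throws IOException {
  if (split.getStart() > 0) {
    in.seek(split.getStart());
    reader.readLine();   // discard the partial record
  }
}

// On read: stop once the stream position passes the end of this split. The
// record that straddles the boundary is read by this split (possibly pulling
// part of it from a remote HDFS node), not by the next one.
private boolean beyondSplit(FSDataInputStream in) throws IOException {
  return in.getPos() > split.getStart() + split.getLength();
}
```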
```java
@JsonTypeName("fixedwidthReaderFieldDescription")
@JsonInclude(JsonInclude.Include.NON_DEFAULT)
public class FixedwidthFieldConfig implements Comparable<FixedwidthFieldConfig> {
```
Just a high-level design note: we now have multiple plugin configs that ask the user to use ad-hoc formats for specifying a per-file schema. This is not how storage plugins were meant to be used, but it is all we have.
Over time, we need a better solution. The provided schema is a start: it provides a single, well-defined syntax for schema. But it is a bit limited in handling per-format specifics, such as width. (There are extension properties for this kind of information, but that's a bit fiddly.)
We need a way to provide the schema of a file simply as a file the user creates that sits alongside the data file. Something like a `schema.json` file if all files in a directory have the same schema, or a `foo.json` if `foo.txt` has a distinct schema.
Support for such a file should be added to the V2 mechanism. (Just use the new format in place of the provided schema.)
```java
@JsonCreator
public FixedwidthFieldConfig(@JsonProperty("name") String name,
                             @JsonProperty("index") int index,
                             @JsonProperty("width") int width,
```
This makes the user do the math. Given indexes, we can compute the widths. Given widths, we can compute the indexes. The one missing piece might be the width of the field separator (if any).
```java
  }
}

@JsonIgnore
```
No need for this tag: Jackson doesn't know what to do with this method anyway.
```java
Collections.sort(fields);
this.fields = fields;

validateFieldInput();
```
This is just a bit dangerous. We validate on deserialize. This seems like a great idea: we do the deserialize in the UI when the user saves the config. But, we also do it on system start. If the deserialize fails there, Drill won't start and it takes a long time to figure out why.
We don't have a good answer for config validation. I'd suggest adding a `validate()` method that we call:
But, allow an invalid config to be stored. Otherwise, unless everything is perfect, nothing can be saved. And, JSON, in a text editor, is a horrible way to write complex config such as this one.
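A minimal sketch of what such a `validate()` method might cover, based on the defaults suggested earlier (getter names are assumed to mirror the constructor parameters; `logger` is assumed to be a static field of the class):

```java
// Sketch only: explicit validation invoked at chosen points, not in the
// constructor, so an invalid config can still be stored and fixed later.
public void validate() {
  for (FixedwidthFieldConfig field : fields) {
    if (field.getName() == null || field.getName().strip().isEmpty()) {
      throw UserException.validationError()
          .message("Fixed-width field is missing a name")
          .build(logger);
    }
    if (field.getIndex() < 0) {
      throw UserException.validationError()
          .message("Field %s: index must be >= 0, found %d", field.getName(), field.getIndex())
          .build(logger);
    }
    if (field.getWidth() <= 0) {
      throw UserException.validationError()
          .message("Field %s: width must be > 0, found %d", field.getName(), field.getWidth())
          .build(logger);
    }
  }
}
```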
```java
if (!Pattern.matches("[a-zA-Z]\\w*", name)) {
  throw UserException
      .validationError()
      .message("Invalid input: " + name)
```
This message will (I hope) be shown to the poor user trying to get a config right in the Drill web UI. Can we be more specific? Such as, "Name is not valid. Only letters allowed."
But, then, why don't we allow numbers or underscores? "Field_3"? If we allowed that, you could use the Java method. Or, extend the pattern.
Also, it is cleaner to use `name = name.strip()` to remove both leading and trailing whitespace so we don't leave whitespace in the name. Otherwise, all users have to know to do their own `strip()` call.
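For instance, a sketch of the combined suggestion (the message text is just an example):

```java
// Strip whitespace first, then validate with a more descriptive message.
name = name.strip();
if (!Pattern.matches("[a-zA-Z]\\w*", name)) {
  throw UserException
      .validationError()
      .message("Invalid field name '%s': must start with a letter and contain only letters, digits, or underscores", name)
      .build(logger);
}
```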
The regex here says that the first character must be a letter, and the rest (`\w*`) must be letters, digits, or underscores.
```java
    .addNullable("Letter", TypeProtos.MinorType.VARCHAR)
    .buildSchema();
RowSet expected = new RowSetBuilder(client.allocator(), expectedSchema)
    .addRow(567, Instant.parse("2021-02-10T15:30:27.00Z"), LocalDate.parse("2021-02-10"), "test")
```
Generally, for something like this, you can use a `LIMIT 2` and only check the first row or two. The code is not going to change column order after the 10th row!
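For example, a trimmed-down check might look like this (a sketch; it assumes the same cluster/client fixtures used in this test class and a hypothetical test file path):

```java
// Verify only the first rows; the file path is illustrative.
String sql = "SELECT * FROM dfs.`fwf/test.fwf` LIMIT 2";
RowSet results = client.queryBuilder().sql(sql).rowSet();
new RowSetComparison(expected).verifyAndClearAll(results);
```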
```
77.77 yzzz 777 06-05-7777 05:11:11 06-05-7777T05:11:11.11Z
88.88 aabb 888 07-06-8888 06:22:22 07-07-8888T06:22:22.22Z
88.88 aabb 888 07-06-8888 06:22:22 07-07-8888T06:22:22.22Z

```
Is the blank line intentional? It's kind of hard to document data files. You could allow comments (lines that start with `#`, say, but there is always someone who has valid data that starts with `#`). Another idea would be to add an "explanation.txt" file that explains the purpose of each of the data files.
Also, I'd hoped to see all manner of invalid files:
Such files will ensure that the error handling works and will raise that perpetual question: if we're a hundred million lines into a fixed-width file, and we hit an error, should we ignore that line and move to the next, or should we fail the query?
It is intentional to test what would happen if there was a blank line in a user input file. Good idea to add the explanation.txt file and additional invalid files.
Dear PR author and reviewers.
This is a generic message to say that we would like to merge this PR in time for the 1.20 release. Currently we're targeting a master branch freeze date of 2021-12-10 (10 Dec). Please strive to complete development and review by this time, or indicate that the PR will need more time (and how much).
Thank you.
Hi @paul-rogers. We're in the throes of trying to convert this plugin to use EVF v2 / scan.v3. This will be the first instance of this kind in the Drill code base, apart from a very simple mock plugin which supports unit tests (CompliantTextBatchReader remains based on EVF v1, from what I can see).
Something that's confusing me is that the EasyFormatPlugin base class is coded against the ManagedReader interface from EVF v1. So I cannot see that we can both derive from EasyFormatPlugin, and also implement ManagedReader from EVF v2. Am I missing something here?
Thanks, James
@jnturton, thanks for pushing the EVF V2 stuff forward! The `EasyFormatPlugin` should contain "shims" for the original format, for EVF V1 and for EVF V2. Given that you said you can't find it, and that the CSV reader is still based on V1, I wonder if there is some branch that never got pushed as a PR? I'll do some research to determine what's what.
@jnturton, turns out the required changes are sitting in a branch in my private repo, `csv`, that somehow never got converted to a PR. I'll see if I can merge that stuff into a PR.
PR 2419 has the EVF V2 adapter for the Easy Format Plugin. I suggest that you use that code, and follow the example there, to add EVF V2 support here.
DRILL-7978: Fixed Width Format Plugin
Description
Developing a format plugin to parse fixed-width files.
Fixed Width Text File Definition: https://www.oracle.com/webfolder/technetwork/data-quality/edqhelp/Content/introduction/getting_started/configuring_fixed_width_text_file_formats.htm
Documentation
Users can now create a format configuration to parse fixed width files.
Testing
Unit tests added. More to come.