[SPARK-50614][SQL] Add Variant shredding support for Parquet
### What changes were proposed in this pull request?
Adds support for shredding in the Parquet writer code. Currently, the only way to enable shredding is through a SQLConf that provides the schema to use for shredding. This doesn't make sense as a user API and is added only for testing. The exact API for Spark to determine a shredding schema is still TBD; likely candidates are inferring it at the task level by inspecting the first few rows of data, or adding an API to specify the schema for a given column. Either way, the code in this PR would be largely unchanged; it would just use a different mechanism to provide the schema. A rough sketch of how the test-only hook might be exercised is shown below.
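The following is a minimal sketch, assuming a test-only conf key and a DDL-style schema string; both the conf name (`spark.sql.variant.forceShreddingSchemaForTest`) and the schema format are illustrative assumptions for this example, not necessarily the exact names introduced by the PR.

```scala
import org.apache.spark.sql.SparkSession

object VariantShreddingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("variant-shredding-sketch")
      .getOrCreate()

    // Hypothetical test-only conf: force a fixed shredding schema for every
    // Variant column written by the Parquet writer. The key and the value
    // format here are assumptions for illustration.
    spark.conf.set(
      "spark.sql.variant.forceShreddingSchemaForTest",
      "a int, b string")

    // Build a Variant column from JSON and write it to Parquet; with the conf
    // set, the writer would shred fields `a` and `b` into typed subcolumns.
    spark.sql(
      """SELECT parse_json(c) AS v FROM VALUES
        |  ('{"a": 1, "b": "hello"}'),
        |  ('{"a": 2, "b": "world"}') AS t(c)""".stripMargin)
      .write
      .mode("overwrite")
      .parquet("/tmp/variant_shredding_sketch")

    spark.stop()
  }
}
```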
### Why are the changes needed?
Needed for Variant shredding support.
### Does this PR introduce _any_ user-facing change?
No. The feature is new in Spark 4.0; it is currently disabled and only usable as a test feature.
### How was this patch tested?
Added a unit test suite.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #49234 from cashmand/SPARK-50614.
Authored-by: cashmand <david.cashman@databricks.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>