
## 2.4.0 Breaking changes

This PR switches the default TFLite backend from `tf_converter` to `flatbuffer_direct` and aligns the surrounding API, CLI, tests, helper scripts, and docs with that behavior.

### What changed

- Changed the default `tflite_backend` in `onnx2tf.convert()` and the CLI to `flatbuffer_direct`.
- Kept `tf_converter` available as an explicit compatibility path.
- Updated helper tooling and README examples to avoid relying on the old implicit SavedModel behavior.
- Clarified documentation so SavedModel-related flows now require explicit direct-export flags or an explicit `tf_converter` selection.
- Added regression coverage for the default-backend path without TensorFlow.
- Updated the migration guide to describe `flatbuffer_direct` as the current default.
- Renamed the README supported-layers heading to make clear that it refers to `tf_converter` coverage.
- Bumped the package version/lockfile for the 2.4.0 change.

### Why this improves the feature

This makes the faster direct path the out-of-the-box experience, reduces accidental dependence on TensorFlow-backed conversion, and makes the remaining legacy path explicit. It also removes ambiguity in docs and tests about which backend is responsible for SavedModel generation.

### Validation

```
pytest -q tests/test_optional_tensorflow.py
pytest -q tests/test_tflite2sm_phase1.py -k "flatbuffer_direct_output_saved_model_validation or tflite_direct_input_validation or tflite_direct_input_new_conflict_validation or tflite_direct_input_rejects_mixed_onnx_and_tflite_input"
python tests/test_model_convert.py --help
```
