NVIDIA cuOpt system for real-time intermodal transport optimization
⚠️ This project is under NDA with Move Intermodal. Specific company data, routes, and client information cannot be publicly shared.
🔄 This project is currently in the refinement phase (Week 13-14). The information below describes the current status and remaining work.
An 18-week academic research project for Move Intermodal developing a proof of concept for a GPU-accelerated optimization system using NVIDIA cuOpt for automated truck planning. The system supports planners in optimizing the current manual planning process by matching orders with trucks based on formally documented business rules. The data pipeline is integrated with Snowflake and processes orders, fleet status, and driver availability to generate optimal routes with minimal empty kilometers and maximum fleet productivity. The end result is an interactive dashboard where planners can visualize, evaluate, and manually adjust optimized truck schedules as needed.
The current manual planning process results in suboptimal routes, excessive empty kilometers, and inefficient use of truck capacity. Planners must manually match trucks to jobs, calculate driver start times, verify delivery windows, and apply geographical knowledge for container reload operations. At its core this is an NP-hard multi-constraint optimization problem in which all business rules (ADR certification, time windows, container availability, driver preferences) must be balanced simultaneously. The biggest challenge is translating partially undocumented planning rules into an automated model that reaches optimal solutions within a reasonable time.
The project started in September 2025 and runs until January 2026 (18 weeks). The infrastructure is fully set up, with a GPU compute environment and Snowflake connectivity, and the ETL pipeline processes order, truck, and driver data with data quality checks. Currently (week 13-14) the project is in the refinement phase, focused on fine-tuning cost functions, constraint weights, and algorithm parameters. The first MVP (Working Planning Tool) was delivered on 28/11/2025; the fully optimized system will be completed on 22/01/2026. Expected impact: a 15-20% reduction in kilometers driven, cost savings per kilometer through optimized routes, and a reduction in planning time from 2-3 hours to under 5 minutes.
Benefits of GPU acceleration for route optimization:
The system handles three main constraint categories, implemented incrementally (week 7-12). Each constraint exists in hard variants (which must be satisfied) and soft preferences (optimization goals). Implementation followed a phased approach: order-specific restrictions first (week 8), then truck and driver requirements (week 8), followed by full optimization (week 9-10), and finally the complex container restrictions (week 11-12).
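To make the distinction between hard constraints and soft preferences concrete, here is a minimal sketch in Python. The field names (requires_adr, preferred_driver, and so on) and the penalty weight are illustrative assumptions, not Move Intermodal's actual rule set: hard constraints act as a binary feasibility filter, while soft preferences only add cost to the objective.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Order:
    requires_adr: bool                       # hard: order needs an ADR-certified truck
    delivery_end: datetime                   # hard: latest allowed delivery time
    preferred_driver: Optional[str] = None   # soft: planner/driver preference

@dataclass
class Truck:
    adr_certified: bool
    available_from: datetime
    driver: str

def is_feasible(order: Order, truck: Truck) -> bool:
    """Hard constraints: violating any of them rules the pairing out entirely."""
    if order.requires_adr and not truck.adr_certified:
        return False
    if truck.available_from > order.delivery_end:
        return False
    return True

def soft_penalty(order: Order, truck: Truck, weight: float = 50.0) -> float:
    """Soft preferences never forbid a pairing; they only make it more expensive."""
    penalty = 0.0
    if order.preferred_driver and truck.driver != order.preferred_driver:
        penalty += weight
    return penalty

order = Order(requires_adr=True, delivery_end=datetime(2025, 12, 1, 17, 0), preferred_driver="D-42")
truck = Truck(adr_certified=True, available_from=datetime(2025, 12, 1, 6, 0), driver="D-17")
print(is_feasible(order, truck), soft_penalty(order, truck))  # True 50.0
```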
Combining all of these constraints in a single optimization model is an NP-hard problem. Traditional solvers rely on branch-and-bound or constraint programming, but these scale poorly as the problem grows. GPU acceleration makes it possible to explore a much larger portion of the solution space in the same timeframe through parallel search.
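As a rough back-of-the-envelope illustration of why exhaustive search breaks down (the order and truck counts below are made-up figures, not real volumes): even when sequencing, time windows, and capacity are ignored, the number of possible order-to-truck assignments grows exponentially.

```python
# Illustrative only: counts how many ways n_orders can each be assigned to one of
# n_trucks, ignoring sequencing, time windows, capacity, and driver rules.
def naive_assignment_count(n_orders: int, n_trucks: int) -> int:
    return n_trucks ** n_orders

# Even a small instance (30 orders, 20 trucks) already yields ~1e39 combinations,
# far beyond exhaustive evaluation; pruning or massively parallel search is required.
print(f"{naive_assignment_count(30, 20):.3e}")
```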
The project is executed by a 5-person academic research team following the CRISP-ML methodology.
The iterative development approach combines weekly sprints with regular feedback from Move Intermodal planners. Each Function Plan step was documented and validated before moving to the next phase.
The system follows a phased pipeline based on the Function Plan (week 7-14):
- load_orders(date) and load_trucks(date) retrieve real-time data from Snowflake, including all relevant restrictions per phase.
- constraints_validation(order, truck) creates a binary matrix indicating whether a truck can execute an order (1 = yes, 0 = no).
- cost_function(order, truck) calculates the cost of a truck-order assignment based on distance, time, and fuel consumption.
- create_planning(trucks, orders) uses the cuOpt API for parallel route search on GPU cores with all constraints applied.
- generate_schedule(trucks) and save_schedule(schedule, date) create and store the final schedules for drivers.

The system integrates with Snowflake for real-time data access. Each morning the pipeline is triggered for next-day planning, retrieving orders, fleet status, and driver availability.
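The sketch below shows how these steps could be chained for a single planning day. Only the call chain mirrors the Function Plan; the data types and stub bodies are placeholders, while the real functions query Snowflake and call the cuOpt solver.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

# Placeholder types and stub bodies; only the call chain mirrors the Function Plan.

@dataclass
class Order:
    id: str

@dataclass
class Truck:
    id: str
    route: List[str] = field(default_factory=list)

def load_orders(plan_date: date) -> List[Order]:
    return [Order("ORD-1"), Order("ORD-2")]         # stub: real version queries Snowflake

def load_trucks(plan_date: date) -> List[Truck]:
    return [Truck("TRK-1")]                         # stub: real version queries Snowflake

def constraints_validation(order: Order, truck: Truck) -> int:
    return 1                                        # stub: 1 = truck may execute the order, 0 = not

def cost_function(order: Order, truck: Truck) -> float:
    return 100.0                                    # stub: cost from distance, time, and fuel

def create_planning(trucks: List[Truck], orders: List[Order]) -> List[Truck]:
    # Real version feeds the feasibility matrix and costs into cuOpt's GPU solver;
    # this stub simply assigns every feasible order to the first truck.
    for order in orders:
        if constraints_validation(order, trucks[0]) and cost_function(order, trucks[0]) >= 0:
            trucks[0].route.append(order.id)
    return trucks

def generate_schedule(trucks: List[Truck]) -> dict:
    return {truck.id: truck.route for truck in trucks}   # stub: one route per driver

def save_schedule(schedule: dict, plan_date: date) -> None:
    print(f"Schedule for {plan_date}: {schedule}")        # stub: real version persists the plan

def run_daily_pipeline(plan_date: date) -> None:
    """Chains the Function Plan steps for a single planning day."""
    trucks = create_planning(load_trucks(plan_date), load_orders(plan_date))
    save_schedule(generate_schedule(trucks), plan_date)

# Triggered each morning to plan the next day.
run_daily_pipeline(date.today() + timedelta(days=1))
```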
The NVIDIA cuOpt API uses parallel computing on GPUs. Where a traditional CPU solver must evaluate thousands of route combinations sequentially, the GPU evaluates them in parallel across thousands of cores. This reduces solution time from hours to less than 5 minutes, even for complex multi-constraint optimization.
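For context, a call to cuOpt's Python routing API might look roughly like the following. The class and method names (routing.DataModel, SolverSettings, Solve) follow the cuOpt routing SDK as documented in earlier releases and may differ in newer versions; the cost matrix and fleet sizes are toy placeholders, not project data.

```python
import cudf
from cuopt import routing  # NVIDIA cuOpt Python routing SDK; exact names vary per release

# Toy 4-location cost matrix; the real pipeline derives costs from distances and driving times.
cost_matrix = cudf.DataFrame([
    [0.0, 10.0, 15.0, 20.0],
    [10.0, 0.0, 35.0, 25.0],
    [15.0, 35.0, 0.0, 30.0],
    [20.0, 25.0, 30.0, 0.0],
])

n_locations, n_vehicles, n_orders = 4, 2, 3
data_model = routing.DataModel(n_locations, n_vehicles, n_orders)
data_model.add_cost_matrix(cost_matrix)
data_model.set_order_locations(cudf.Series([1, 2, 3]))  # which location each order belongs to

solver_settings = routing.SolverSettings()
solver_settings.set_time_limit(5)  # search budget in seconds; the GPU explores routes in parallel

solution = routing.Solve(data_model, solver_settings)
if solution.get_status() == 0:      # 0 = feasible solution found
    print(solution.get_route())     # per-vehicle stop sequence
```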
The planning functionality was delivered incrementally:
- load_orders(), load_trucks(), and create_planning() functions for basic truck-order matching on one specific day.
- constraints_validation() and cost_function(), followed by cuOpt optimization with all restrictions except containers.
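As an illustration of what a cost_function() could look like, the sketch below combines distance, driving time, and estimated fuel use into one assignment cost. All weights and field names are assumptions; the actual cost function and constraint weights are being tuned during the refinement phase.

```python
from dataclasses import dataclass

# Hypothetical weights; the project tunes the real cost function during refinement.
COST_PER_KM = 1.0            # cost per kilometer driven (including empty kilometers)
COST_PER_HOUR = 35.0         # driver cost per hour
FUEL_PRICE_PER_LITRE = 1.6   # assumed fuel price
FUEL_LITRES_PER_KM = 0.3     # assumed average truck consumption

@dataclass
class Assignment:
    distance_km: float   # empty + loaded kilometers for this truck-order pairing
    drive_hours: float   # estimated driving time

def cost_function(assignment: Assignment) -> float:
    """Weighted cost of assigning one order to one truck (illustrative only)."""
    fuel_cost = assignment.distance_km * FUEL_LITRES_PER_KM * FUEL_PRICE_PER_LITRE
    return (assignment.distance_km * COST_PER_KM
            + assignment.drive_hours * COST_PER_HOUR
            + fuel_cost)

print(round(cost_function(Assignment(distance_km=120.0, drive_hours=2.5)), 2))  # 265.1
```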