Can Zenmuse L3 Combine LiDAR Efficiency with Photogrammetry-Grade Imaging in a Single Flight?

Introduction: The Idea of a Single-Flight Workflow

In drone mapping projects, LiDAR and photogrammetry have traditionally been treated as two separate workflows. A LiDAR mission focuses on capturing accurate 3D geometry, while a photogrammetry mission is designed to provide high image quality for orthomosaic generation. In many cases, this means flying the same area twice: once for LiDAR, and once for imagery.

The introduction of the DJI Zenmuse L3 raises a practical question: can its integrated LiDAR sensor and dual 100 MP RGB cameras capture all the data both workflows require, combining them into a single flight?

The key issue is not whether this is technically possible—it is—but whether a single-flight workflow can consistently deliver results that meet the requirements of both LiDAR and photogrammetry outputs.

Why LiDAR and Photogrammetry Are Usually Separated

LiDAR and photogrammetry differ not only in how they capture data but also in how their missions are planned. Each method has its own requirements, and those requirements often conflict.

Photogrammetry is primarily used for generating orthomosaics and highly detailed, textured 3D surface models. Its main strength lies in producing visually rich, photorealistic reconstructions based on overlapping imagery. This makes it especially effective for applications such as visual interpretation, mapping, cultural heritage documentation, and any task where surface texture and appearance are critical. However, its performance strongly depends on surface visibility and image quality. As a result, photogrammetry is less reliable in dense vegetation, where ground reconstruction becomes difficult, and in cases involving thin linear structures such as power lines.

LiDAR, on the other hand, is a geometry-driven sensing technology that provides highly accurate and dense point cloud data. One of its key advantages is its partial ability to penetrate vegetation, allowing extraction of ground information and generation of high-quality Digital Elevation Models (DEMs), even in complex forested environments. This makes LiDAR particularly strong for terrain analysis, infrastructure monitoring, and applications such as power line inspection, where geometric accuracy and completeness are critical.

| Parameter | LiDAR Survey | Photogrammetry Survey |
| --- | --- | --- |
| Flight altitude | Higher | Lower |
| Image overlap | Not critical | Critical |
| Flight speed | Higher allowed | Lower preferred |
| Lighting conditions | Not critical | Critical |
| Primary output | Point cloud | Orthomosaic / 3D mesh |

In summary, photogrammetry is generally optimized for visual and textural reconstruction of surfaces, while LiDAR is better suited for precise geometric modeling and terrain representation under challenging environmental conditions. In modern geospatial workflows, these two technologies are typically integrated to leverage the complementary strengths of both.

DJI Zenmuse L3: Integrated LiDAR and RGB Acquisition

The DJI Zenmuse L3 is a purpose-built, high-precision mapping payload that integrates long-range LiDAR, dual high-resolution RGB cameras, and a next-generation high-precision POS (position and orientation system).

Unlike first-generation systems that optimize for either LiDAR or imaging quality, the L3 is specifically designed to generate high-accuracy, tightly coupled imaging and LiDAR data simultaneously.

Long-Range LiDAR System

At the heart of this system is a 1535 nm LiDAR module that can detect objects from distances of up to 950 meters, even on targets with low reflectivity (around 10%).

Key LiDAR characteristics include:

  • Pulse rate up to 2 million pulses per second
  • Multiple returns (up to 16 returns per pulse)
  • Beam divergence of approximately 0.25 mrad
  • Adjustable pulse emission frequency for different mission profiles

This combination supports both detailed topographic reconstruction of difficult environments and efficient long-range data collection. The multi-return functionality, for instance, enables ground-surface recovery beneath dense canopy.
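As a back-of-envelope check, the nominal ground point density implied by a given pulse rate can be estimated from flight speed and swath width. The mission values below are hypothetical planning inputs, not recommended L3 settings:

```python
import math

def nominal_point_density(pulse_rate_hz, speed_m_s, altitude_m, fov_deg):
    """Single-return point density (points/m^2) for a nadir-looking scanner.

    Assumes every pulse yields one ground return and points spread evenly
    over the swath -- a first-order planning estimate only.
    """
    swath_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    return pulse_rate_hz / (speed_m_s * swath_m)

# Hypothetical mission: 2 MHz pulse rate, 10 m/s, 150 m AGL, 70 deg scan FOV
density = nominal_point_density(2_000_000, 10, 150, 70)
print(f"~{density:.0f} pts/m^2")
```

Real densities vary with scanning mode, returns per pulse, and terrain, so this is a first-order estimate rather than a guaranteed figure.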

Scanning Modes for Mission Adaptation

L3 supports multiple scanning strategies, allowing operators to optimize data acquisition depending on terrain complexity and required output quality.

| Mode | Characteristics | Best Use Case |
| --- | --- | --- |
| Linear | Uniform point distribution | Standard terrain mapping |
| Star-Shaped | Multi-angle coverage | Forests, urban environments |
| Non-Repetitive | High angular variability | Power lines, complex structures |

This flexibility allows a single sensor to adapt across significantly different survey environments without hardware changes.

Dual 100 MP RGB Imaging System

The system integrates dual 100 MP 4/3 CMOS RGB cameras with mechanical shutters, designed not only for colorization but also for full photogrammetric reconstruction.

Key imaging parameters:

  • Combined field of view: approximately 107°
  • High overlap efficiency due to wide coverage per flight line
  • Effective GSD down to ~3 cm at 300 m altitude
  • Pixel binning mode improving low-light performance

Even at higher altitudes, the system maintains sufficient image density for orthophoto generation and DSM extraction.
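The relationship between altitude and GSD follows the standard pinhole formula GSD = pixel pitch × altitude / focal length. The sensor values below are illustrative assumptions chosen to reproduce the ~3 cm figure; they are not confirmed L3 specifications:

```python
def gsd_cm(pixel_pitch_um, altitude_m, focal_length_mm):
    """Ground sample distance in cm: GSD = pixel pitch * altitude / focal length."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3) * 100

# Hypothetical sensor parameters (not official L3 specs): 1.5 um pixels, 15 mm lens
print(f"GSD at 300 m: {gsd_cm(1.5, 300, 15):.1f} cm")
```

Because GSD scales linearly with altitude, flying the same assumed sensor at 150 m would halve the GSD to about 1.5 cm.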

Accuracy and POS Integration

One of the system’s key innovations is its highly accurate POS module, which maintains metric consistency between the LiDAR and RGB data.

Reported performance includes:

  • Heading accuracy: 0.02°
  • Pitch accuracy: 0.01° (post-processed, 1σ)
  • LiDAR ranging repeatability: ~5 mm at 150 m
  • Vertical accuracy

Time synchronization at the microsecond level ensures that each LiDAR pulse and camera image is tagged with an accurate timestamp, so no manual alignment is required in post-processing.

Operational Efficiency and Coverage

The system is optimized for large-area mapping efficiency rather than isolated high-detail capture.

Key operational metrics:

  • Up to 10 km² per flight (depending on altitude)
  • Up to 100 km² per day using DJI Matrice 400
  • Reduced number of flight lines due to wide FOV cameras

This makes L3 particularly suitable for:

  • Infrastructure corridors
  • Mining and earthworks
  • Forestry and biomass estimation
  • Coastal and terrain monitoring
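The reduction in flight lines from a wide camera FOV can be sanity-checked with a simple geometric estimate. All figures below are hypothetical planning values, not DJI specifications:

```python
import math

def flight_lines(area_width_m, altitude_m, fov_deg, side_overlap):
    """Number of parallel flight lines needed to cover a block of given width."""
    swath = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    spacing = swath * (1 - side_overlap)   # effective line-to-line spacing
    return math.ceil(area_width_m / spacing)

# Hypothetical block 2 km wide, 300 m AGL, 60% side overlap
print(flight_lines(2000, 300, 107, 0.6))  # ~107 deg combined FOV
print(flight_lines(2000, 300, 60, 0.6))   # narrower camera for comparison
```

Under these assumptions, the ~107° combined FOV roughly halves the number of flight lines compared with a narrower camera, which is where much of the coverage gain comes from.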

What Zenmuse L3 Changes

The DJI Zenmuse L3 marks a significant step forward in integrated data capture. It combines a LiDAR sensor with dual high-resolution RGB cameras, which collect point cloud data and high-definition images at the same time.

The L3 improves on prior payloads in both LiDAR performance and image quality; the dual 100 MP sensors are a step toward photogrammetry-grade imaging rather than simple colorization.

The concept behind the L3 is to capture everything it needs in a single flight, eliminating the need for a second run. However, the effectiveness of this approach depends on how well one flight profile can satisfy two different sets of requirements.

Can L3 Replace Photogrammetry for Orthomosaics?

The main limitation of a single-flight approach becomes clear when considering orthomosaic generation.

High-quality orthophotos require:

  • Consistent nadir imagery
  • High forward and side overlap
  • Stable geometry across the dataset

These requirements are usually met through well-planned photogrammetry missions. In a flight planned around LiDAR priorities, however, they often take a back seat.

While the DJI Zenmuse L3 captures high-resolution images, the flight profile may not provide the overlap or geometry required for precise orthomosaic reconstruction. As a result, the quality of orthophotos can vary depending on the mission design.
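To make the overlap trade-off concrete, the forward overlap implied by a given photo spacing can be estimated from the along-track image footprint. The numbers below are illustrative assumptions, not L3 flight parameters:

```python
import math

def forward_overlap(altitude_m, fov_deg, trigger_dist_m):
    """Forward overlap fraction from along-track footprint and photo spacing."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    return 1 - trigger_dist_m / footprint

# Hypothetical along-track FOV of 60 deg at 300 m AGL (footprint ~346 m)
print(f"{forward_overlap(300, 60, 60):.0%}")   # one photo every 60 m along track
print(f"{forward_overlap(300, 60, 120):.0%}")  # one photo every 120 m
```

Halving the trigger distance in this sketch lifts forward overlap from roughly 65% to over 80%, which illustrates why LiDAR-paced flights with longer photo spacing can fall short of photogrammetry overlap targets.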

| Aspect | L3 Single Flight | Dedicated Photogrammetry Flight |
| --- | --- | --- |
| Efficiency | High | Lower |
| Image overlap | Limited | Optimized |
| Orthomosaic quality | Acceptable / variable | High / consistent |
| Control over geometry | Limited | Full |

In practice, L3 imagery can be sufficient for general mapping and visualization. For engineering-grade orthophotos, a dedicated photogrammetry flight often remains necessary.

Where Single-Flight Workflow Works Well

Despite these limitations, there are many scenarios where a combined LiDAR and imaging workflow is both efficient and sufficient.

Typical use cases include:

  • Large-area mapping where speed is a priority
  • Corridor surveys such as roads, railways, or pipelines
  • Preliminary site assessments
  • Projects where LiDAR is the primary deliverable
  • Situations with limited access or flight windows

In these cases, the ability to capture both geometry and imagery in a single mission reduces operational complexity and overall project time.

Where Separate Flights Are Still Required

There are also scenarios where separate LiDAR and photogrammetry missions remain necessary.

These include:

  • High-precision orthomosaic production
  • Engineering and cadastral surveys
  • Dense urban environments requiring detailed facade reconstruction
  • Projects with strict regulatory or accuracy requirements

In these situations, the flexibility to optimize each flight independently outweighs the efficiency of a combined approach.

LiDAR + RGB Integration: Practical Benefits

Even when a single flight does not fully replace photogrammetry, the integration of LiDAR and RGB data provides clear advantages.

The imagery captured by the DJI Zenmuse L3 can be used to:

  • Colorize point clouds
  • Improve interpretation of terrain and structures
  • Support inspection and analysis workflows

This reduces the need for additional data collection in many cases and enhances the usability of LiDAR outputs.

| Workflow | Field Time | Data Completeness | Flexibility |
| --- | --- | --- | --- |
| Separate flights | Higher | Maximum | High |
| Single flight (L3) | Lower | High | Medium |

The key benefit is not replacing photogrammetry entirely, but reducing how often it is required.

Processing Considerations

After data capture, processing workflows still reflect the differences between LiDAR and photogrammetry.

In DJI Terra, LiDAR data and imagery can be processed within a unified environment, particularly for point cloud generation and visualization. However, generating high-quality orthomosaics still requires photogrammetry-specific processing steps.

Software such as Pix4Dmapper remains optimized for image-based reconstruction, offering greater control over overlap, calibration, and output quality. A more detailed breakdown of its role in professional mapping workflows is covered in one of our previous articles, Pix4D Software Ecosystem: Professional Tools for Drone Mapping and Geospatial Analysis.

This means that even when data is captured in a single flight, processing workflows may still diverge depending on the required outputs and the downstream system where the data will be used.

Conclusion

The DJI Zenmuse L3 represents a significant step toward integrated data capture. It reduces the need for separate missions and allows both LiDAR and imagery to be collected efficiently in a single flight.

However, it does not fully replace dedicated photogrammetry workflows in all cases.

For projects where efficiency and coverage are the priority, a single-flight approach is often sufficient. For projects requiring consistent, high-precision orthophotos, separate photogrammetry missions remain the more reliable option.

In practice, the L3 is best understood not as a replacement, but as a tool that expands flexibility. It allows survey teams to decide when a single flight is enough—and when additional data capture is justified.