Troublesome joints – photogrammetry camera positions when aligning point clouds in CloudCompare.

Align photogrammetry point clouds (i.e. two “chunk” halves of an object) using CloudCompare, and transform the cameras to preserve their relationship to the object (with Metashape).

The better method

export 2(+) dense point clouds from Metashape.

open in CloudCompare

manual transform/fine align the two clouds (note: this can be more complicated than pressing a button at times – reading the CloudCompare manual and learning the ropes is well worth it)

copy the transforms from the console log.

reformat these so that Metashape will understand them, then either add them to a python file and run it in Metashape, or paste them into the python console.

The changes to the transform matrix are:

remove the timestamp at the beginning
swap the spaces for commas
wrap each line with []
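That reformatting can also be scripted. The sketch below is one way of doing it in Python; the exact console text format (a bracketed timestamp before a row) is an assumption, so adjust the parsing to whatever your CloudCompare version actually prints.

```python
# Sketch: turn matrix text copied from the CloudCompare console
# into the nested-list form that Metashape.Matrix expects.
# Assumption: four rows of four space-separated numbers, with an
# optional "[HH:MM:SS]" timestamp prefix on a line.

def cc_text_to_rows(text):
    rows = []
    for line in text.strip().splitlines():
        if line.lstrip().startswith("["):
            # strip a leading "[timestamp]" if present
            line = line.split("]", 1)[1]
        rows.append([float(v) for v in line.split()])
    return rows

example = """[10:42:01] 0.173301845789 -0.885858654976 0.430372953415 -6.250498294830
0.967402338982 0.071181990206 -0.243034482002 2.022546291351
0.184659391642 0.458462178707 0.869317770004 -7.621248245239
0.000000000000 0.000000000000 0.000000000000 1.000000000000"""

rows = cc_text_to_rows(example)
# rows can now be passed straight to Metashape.Matrix(rows)
```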

Important: the order of items in your formula matters when multiplying matrices.

Example code:

import Metashape  # not needed when pasting into Metashape's own python console

# define your document and chunk (also required when using the python console)

doc = Metashape.app.document
chunk = doc.chunk

# the transform matrix copied from CloudCompare,
# turned into the appropriate variable type so the maths works

m1 = Metashape.Matrix([[0.173301845789, -0.885858654976, 0.430372953415, -6.250498294830],
                       [0.967402338982, 0.071181990206, -0.243034482002, 2.022546291351],
                       [0.184659391642, 0.458462178707, 0.869317770004, -7.621248245239],
                       [0.000000000000, 0.000000000000, 0.000000000000, 1.000000000000]])

# the original transform matrix of your chunk in Metashape

mOrig = chunk.transform.matrix

# set the chunk transform to the product of the two matrices
chunk.transform.matrix = m1 * mOrig

It’s common to end up with multiple transforms when aligning by hand or with ICP, but you can chain them all together in the code. Often you’ll only do this the once, but it’s helpful to keep it stored for future reference, especially if you’re archiving your processing steps.
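To illustrate the ordering when chaining: each new transform goes on the left of the ones before it, so the Metashape line would become something like chunk.transform.matrix = m2 * m1 * mOrig. The sketch below uses numpy as a stand-in for Metashape.Matrix purely to demonstrate the order; the two example matrices are made up.

```python
import numpy as np

# two hypothetical CloudCompare transforms, applied in this order:
t1 = np.array([[1.0, 0.0, 0.0, 5.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]])   # first alignment: shift x by 5

t2 = np.array([[0.0, -1.0, 0.0, 0.0],
               [1.0,  0.0, 0.0, 0.0],
               [0.0,  0.0, 1.0, 0.0],
               [0.0,  0.0, 0.0, 1.0]])  # second alignment: 90 degrees about z

m_orig = np.eye(4)  # stand-in for the original chunk transform

# t1 was applied first, so it sits closest to m_orig; t2 goes leftmost
combined = t2 @ t1 @ m_orig

# check with a test point: (1, 0, 0) -> shift -> (6, 0, 0) -> rotate -> (0, 6, 0)
p = combined @ np.array([1.0, 0.0, 0.0, 1.0])
```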

*Update:

A friend recently pointed out something I’d never noticed – the side panel of CloudCompare shows the cumulative transformation since the file was opened, so you only need to copy that one transform after all your adjustments, rather than each one from the console as you work.

The other method

export 2(+) dense point clouds from Metashape.

open in CloudCompare

manual transform/fine align the two clouds (note: this can be more complicated than pressing a button at times – reading the CloudCompare manual and learning the ropes is well worth it)

save the point cloud file (keep CloudCompare open)

export the cameras as .csv from the reference panel in Metashape

open it in the same CloudCompare window. (You will need to adjust the separator value and start row until it looks friendly. Keep yaw, roll etc. as SF – scalar fields – as this helps keep your data together.)

copy transform (from the cloud alignment) from log panel

apply the transform to the camera csv file. If you needed multiple transforms to get your match, repeat each one in turn.

save the camera data as a new ASCII text file with settings of your choice – PTS order works.

this file will have different (or no) headings and will likely lose your labels/camera names – copy the new coordinates into the original file exported from Metashape (depending on your preference, a text editor or a spreadsheet program might be the best option).

spreadsheet-free column copy-paste for Mac users – an easy way is to use TextEdit: hold down the ‘option’ key (⌥), click and drag a box around the values you need, then do the same in the original file and paste over.
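If you’d rather skip the CloudCompare round trip for the csv entirely, the same transform can be applied to the camera file in a few lines of Python. This is only a sketch: the column layout (label, x, y, z, then extra columns like yaw) is an assumption, so match it to what Metashape actually exported. As with the CloudCompare route, orientation columns are carried through unchanged.

```python
import csv, io

# Hypothetical 4x4 transform (as copied from CloudCompare):
# a 90 degree rotation about z plus a shift of 2 in z, for illustration.
M = [[0.0, -1.0, 0.0, 0.0],
     [1.0,  0.0, 0.0, 0.0],
     [0.0,  0.0, 1.0, 2.0],
     [0.0,  0.0, 0.0, 1.0]]

def transform_point(m, x, y, z):
    # multiply the homogeneous point (x, y, z, 1) by the matrix,
    # returning the transformed x, y, z
    p = (x, y, z, 1.0)
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(3))

# stand-in for the exported camera csv; assumed layout: label, x, y, z, extras
src = io.StringIO("label,x,y,z,yaw\ncam_001,1.0,0.0,0.0,90\n")
out = io.StringIO()

reader = csv.reader(src)
writer = csv.writer(out)
writer.writerow(next(reader))  # copy the header row untouched
for row in reader:
    x, y, z = transform_point(M, float(row[1]), float(row[2]), float(row[3]))
    # labels and any trailing columns (yaw, roll, ...) pass through unchanged
    writer.writerow([row[0], x, y, z] + row[4:])
```

Because the labels never leave the file, there is nothing to copy-paste back afterwards.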

The intro (why this post)

Sometimes combining two-part or multi-part photogrammetry to get a whole object can be a real pain.

My preferred method for joining two halves of an object is to use masks, with either auto-chunk alignment or realignment as a single combined chunk with masks on key points. This has a processing-time overhead of course, but in my mind at least, if you’ve got good coverage, a fixed focal distance, and an (even partially) calibrated lens, it should result in more accurate geometry all round. However, this method is at the mercy of near-perfect lighting and can come undone for very small objects that sometimes need a helping hand in the alignment. Then of course there’s the need for enough overlap, which results in data redundancy and bloat.

Most instructions online tell you to align models using manually placed markers on visible features. That works a lot of the time, but doesn’t always behave as you’d like: either depth of field or surface characteristics throws out the auto-pinning, or a mistake in the capture means there’s less-than-ideal overlap (something that happens more often than I’d like to admit). To get the best results you really should go through each point, check it is correct, and reposition as needed.

Frankly there are better things to do in life.

CloudCompare is super at aligning point clouds or meshes based on geometry, and can get a much better result from a small bit of overlap than faffing about looking for matching spots and smudges. Ideally, though, you want to align your point clouds before you build your mesh and texture.

Simply importing your nicely aligned dense cloud will leave it out of place, and there’s no way (as far as I know – they keep changing things) of performing this transform in Metashape itself. That’s where CloudCompare comes to the rescue yet again.

It’s a relatively painless procedure to export the coordinates of the cameras, transform them, and re-import them – shifting not only the cameras but also the tie points, so you can continue with your normal processing steps.