image : fratillari

Miscellany

thoughts, ponderings, experiments & recent things

 

Align photogrammetry pointclouds (i.e. two “chunks”, such as the two halves of an object) using CloudCompare, and transform the cameras to preserve their relationship to the object (with Metashape).

The better method

export 2(+) dense point clouds from metashape.

open in CloudCompare

manual transform/fine align the two clouds (note: this can be more complicated than pressing a button at times. Reading the CloudCompare manual and learning the ropes is well worth it)

copy the transforms from the console log.

Reformat these so that Metashape will understand them, then add them to a Python file and run it in Metashape, or paste them into the Python console.

Changes to transform matrix are:

remove the timestamp at the beginning
swap spaces for commas
wrap each line with []

Important: the order of the matrices in your formula matters when multiplying them.

Example code:

import Metashape

# define your document and chunk (also required when using the Python console)
doc = Metashape.app.document
chunk = doc.chunk

# the transform matrix copied from CloudCompare,
# turned into the appropriate variable type so the matrix maths works
m1 = Metashape.Matrix([[0.173301845789, -0.885858654976, 0.430372953415, -6.250498294830],
[0.967402338982, 0.071181990206, -0.243034482002, 2.022546291351],
[0.184659391642, 0.458462178707, 0.869317770004, -7.621248245239],
[0.000000000000, 0.000000000000, 0.000000000000, 1.000000000000]])

# the original transform matrix of your chunk in Metashape
mOrig = chunk.transform.matrix

# set the chunk transform to the product of the two matrices
chunk.transform.matrix = m1 * mOrig

It’s common to end up with multiple transforms when aligning by hand/with ICP, but you can chain them all together in the code, as in the sketch below. Often you’ll only do this the once, but it’s helpful to keep the script stored for future reference, especially if you’re archiving your steps for the future.
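A minimal sketch of what that chaining might look like (the identity matrices here are just placeholders – swap in the transforms copied from CloudCompare, with m1 being the one applied first):

import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# placeholder matrices - replace with the transforms copied from CloudCompare,
# reformatted as above, in the order they were applied (m1 first, then m2)
m1 = Metashape.Matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
m2 = Metashape.Matrix([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

# the later transform goes first in the product, so it is applied after the earlier one
chunk.transform.matrix = m2 * m1 * chunk.transform.matrix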

*update.

A friend recently pointed out something I’d never noticed – the side panel of CloudCompare shows the cumulative transformation applied since a file was opened, so you only need to copy that one transform after all your adjustments, rather than each one from the console as you work.

The other method

export 2(+) dense point clouds from metashape.

open in CloudCompare

manual transform/fine align the two clouds (note: this can be more complicated than pressing a button at times. Reading the CloudCompare manual and learning the ropes is well worth it)

save pointcloud file (keep cloudcompare open)

export the cameras as .csv from the Reference panel in Metashape

open it in the same CloudCompare window (you will need to adjust the separator value and start row until the preview looks right. Keep yaw, roll etc. as SF – scalar fields – as this helps keep your data together)

copy transform (from the cloud alignment) from log panel

apply it to the camera csv file. If you needed multiple transforms to get your match, you need to repeat each one in turn.

save the camera data as a new ASCII text file with settings of your choice – PTS order works.

this file will have different/no headings, and will likely lose your labels/camera names – copy the new coordinates into the original file exported from Metashape (depending on your preference, a text editor or spreadsheet program might be the best option).

spreadsheet-free column copy-paste for Mac users – an easy way is to use TextEdit: hold down the ‘option’ key (⌥), click and drag a box around the values you need, then do the same in the original file and paste over.
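If you’d rather script that copy-paste instead, here’s a minimal sketch – the filenames and column positions are assumptions, so check them against your own exports, and it assumes both files list the cameras in the same order:

import csv

# hypothetical filenames - adjust to your own exports
ORIGINAL = "cameras_metashape.csv"       # reference export from Metashape (comma separated)
TRANSFORMED = "cameras_transformed.txt"  # ASCII/PTS file saved from CloudCompare
OUTPUT = "cameras_updated.csv"

# assumes X, Y, Z are the first three whitespace-separated values on each line
with open(TRANSFORMED) as f:
    new_coords = [line.split()[:3] for line in f if line.strip()]

# Metashape's comment/header lines (starting with #) are dropped here
with open(ORIGINAL, newline="") as f:
    rows = [row for row in csv.reader(f) if row and not row[0].startswith("#")]

# assumes the original file runs label, X, Y, Z, ... with cameras in the same order
for row, coords in zip(rows, new_coords):
    row[1:4] = coords

with open(OUTPUT, "w", newline="") as f:
    csv.writer(f).writerows(rows)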

The intro (why this post)

Sometimes combining two-part or multipart photogrammetry to get a whole object can be a real pain.

My preferred method for joining two halves of an object is to use masks, and either auto chunk alignment, or realignment as a single combined chunk with masks on key points. This has a processing-time overhead of course, but in my mind at least, if you’ve got good coverage, a fixed focal distance, and an (even partially) calibrated lens, it should (I think) result in more accurate geometry all round. However, this method is at the mercy of near-perfect lighting and can come undone for very small objects that sometimes need a helping hand in the alignment. Then of course you need enough overlap, which results in data redundancy and bloat, blah blah.

Most instructions online tell you to align models using manually placed markers on visible features. Which is true a lot of the time, but doesn’t always work as you’d like. Either depth of field or surface characteristics throws out the auto-pinning, or maybe a mistake in the capture means there’s less than ideal overlap (something that happens more than I’d like to say). To get the best results you really should go through each point and check they are correct, repositioning as needed.

Frankly there are better things to do in life.

CloudCompare is super at aligning pointclouds or meshes based on geometry, and can get a much better result from a small bit of overlap than faffing about looking for matching spots and smudges. But ideally you want to align your pointclouds before you build your mesh and texture.

If you just import your nicely aligned dense cloud, it will be out of place, and there’s no way (as far as I know, they keep changing things) of performing this transform inside Metashape itself. That’s where CloudCompare comes to the rescue yet again.

It’s a relatively painless procedure to export the coordinates of the cameras, transform them, and re-import them – shifting not only the cameras, but the tiepoints, so you can continue with your normal processing steps.

First catch your data.

Wrestle it into the processing and editing software of your choice and process and clean your points to taste. When you are happy with this, get it into CloudCompare.

If you are using real-world coordinates, now’s the time to make a note of the global shift used on opening, to avoid headaches later (select a cloud, Edit -> Global Shift).

Dice into bite-size portions along the faces of interest (walls, ceilings, floors etc) using the slice tool.

The faffy bit.

This bit needs a bit of prep, and some notes so you don’t get yourself in a mess.

Step 1. choose a piece

Step 2. move the view so you are looking toward the face. You don’t need to be precise.

Step 3. Use the point picking tool to make 2 points at either end of your wall roughly where you want a baseline to be.

Save these points as a text file.

Step 4: open text file in a text editor of your choice, and level the Z. E.g. if point 1 is Z 102.33 and point 2 is Z 103.65, change them both to 102.00.

Then, copy point 2 and paste on the next line and change the Z to something more, such as 107 in this example – this is point 3.
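For example (the X and Y values here are made up; the exact layout depends on how you saved the picked points), the point file might go from:

412.52 87.10 102.33
415.88 92.47 103.65

to this after levelling the Z and adding point 3:

412.52 87.10 102.00
415.88 92.47 102.00
415.88 92.47 107.00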

Step 5: open this file in cloud compare and delete the original points

Step 6: use the alignment tool to reposition these 3 points on the flat – you select point 1 (bottom left) first, then point 2, then point 3.

Step 7: copy the transform that is displayed in the console, select the original pointcloud, open the transform dialog (cmd/ctrl + T) and paste it in to apply the same transform to the pointcloud

This has got the pointcloud belly up for rasterizing. Next we want to make it as QGIS friendly as possible.

I’d suggest doing up to step 7 for all of the walls you want first, as it’s easy to forget the next step later if you’re not careful.

Step 8: To be as grid friendly in our 2D work as we can, use the power of maths and the transform tool to set the Y to the Z of points 1 and 2. Look at the Y of your repositioned point with the point tool; it will be something a bit random, like 8.2. So in this example, subtract 8.2 from 102 and shift the cloud and points by the difference in Y, i.e. y = (102 − 8.2) = 93.8.
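If you’re applying that shift through the transform tool (Edit > Apply transformation), it’s just an identity matrix with the difference (the made-up 93.8 from this example) in the Y translation slot:

1.000000000000 0.000000000000 0.000000000000 0.000000000000
0.000000000000 1.000000000000 0.000000000000 93.800000000000
0.000000000000 0.000000000000 1.000000000000 0.000000000000
0.000000000000 0.000000000000 0.000000000000 1.000000000000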

Step 9: save these new coordinates, along with the small y transform.

You could do something similar with the X if you want so that your length dimensions start at 0 from point 1, or something else so that all your walls line up nicely. As you like.

Step 10: The global shift again. We want it in local coordinates, cos it don’t make no sense otherwise. Select all your transformed walls, and set the global shift to 0.

Step 11: rasterize your walls.

Step 12: open in QGIS and draw!
Your Y is Z in the real world; your X is just metres along the wall.

If you add your XY points to QGIS too, and label them up with the original coordinates, you now have a way to print your data in 2D with enough information to return it to 3D should you only have a book to go on.

A sketch of a memory that has haunted me all my life. I’ve never been able to separate whether this is something I saw, or imagined, or whether it’s stayed static or grown with me, but one way or another it is seeded in one of my earliest memories.

With luck such graceful flights of the plastic bags will become a thing of the past…

A tree stump with a plastic bag pulling in the wind

The Lightroom publish services can be great for generating jpg copies of files in structured folders from a less approachable core archive. But move between computers, hard drives or even just drive letters and it all falls down, as the path cannot be easily changed.

Enter some SQL tinkering. https://sqlitebrowser.org is pretty handy for this.

https://www.lightroomqueen.com/community/threads/how-to-change-export-location-of-publish-service-or-migrate-published-folder.13018/
Provides a brief mention of how this can be achieved in Lightroom 3.4 – edit the AgLibraryPublishedCollection table and change the remoteCollectionId path to what you need.

Warning! Tinkering inside the LR catalogue is risky; use a copy and tread carefully.

Example SQL code to use:

update AgLibraryPublishedCollection
SET
remoteCollectionId = replace(remoteCollectionId, 'oldPath', 'newPath')
where remoteCollectionId like '%oldPath%'

But if this doesn’t work, then create a new empty publishing service with the new target location, and edit the genealogy and parent columns of AgLibraryPublishedCollection to replace the original publish service id with that of the new one. This will migrate the smart collections and folders into the new publishing service and should work. Example SQL code to use:

update AgLibraryPublishedCollection
SET
genealogy = replace(genealogy, '/79269757', '/79979000')
where genealogy like '/79269757%'

and

update AgLibraryPublishedCollection
SET
parent = replace(parent, '9269757', '9979000')
where parent like '9269757'
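If you’d rather script the first edit than click through DB Browser, a minimal sketch with Python’s sqlite3 module (run against a copy of the catalogue, with your own paths swapped in – the filename and paths below are placeholders) might look like:

import sqlite3

# placeholder values - point this at a COPY of your .lrcat and use your real paths
catalogue = "copy-of-catalogue.lrcat"
old_path, new_path = "oldPath", "newPath"

con = sqlite3.connect(catalogue)
con.execute(
    "UPDATE AgLibraryPublishedCollection "
    "SET remoteCollectionId = replace(remoteCollectionId, ?, ?) "
    "WHERE remoteCollectionId LIKE ?",
    (old_path, new_path, "%" + old_path + "%"),
)
con.commit()
con.close()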

BLK360 scanner: small, portable, a pain in the rear. The same can be said of ReCap Pro.

If, as is not uncommon, you find yourself without ReCap but wanting to peek inside a scan to check it’s the right one, you can use the photos captured by the scanner.

They live in the “sourcedata” folder of the ReCap project, and are accompanied by a JSON file giving some info.

Rename the files to .raw and open them in Photoshop/Fiji/some other capable image editor with appropriate settings, and you’ll get something sort of like what you expect, which can then be tweaked with curves.

This might depend on whether it is an HDR file or not, so some tweaking might be needed.

Similarly, photos stored in notes taken in the ReCap app are just JPG images without the extension. Add the extension and they’ll open.
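A minimal sketch for adding the extension in bulk (the folder path is a placeholder, and it assumes every extensionless file in there really is a JPG):

from pathlib import Path

# placeholder path - point this at the folder holding the ReCap note photos
notes_folder = Path("recap_project/notes")

for item in notes_folder.iterdir():
    # only touch files that have no extension at all
    if item.is_file() and item.suffix == "":
        item.rename(item.with_suffix(".jpg"))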

Why Flotation isn’t always the best method for recovering fragile archaeobotanical remains.

A small video as supporting media demonstrating preservation bias in archaeology. A reminder that the techniques we use to recover fragile remains may be the ones that destroy the very material we are seeking to preserve.

close up of a small fragment of twig held in tweezers next to a pot of water
closeup of a twig sinking and disintegrating in a small tub of water
Close up of a small round plant tuber dropping into a small tub of water
Close up of a small round plant tuber disintegrating into a black cloud of sinking dust in a small tub of water.
Credits to the lead researcher Amaia Arranz Otaegui

It’s not yet as cold as it was on this fine walk with friends, but autumn is creeping closer.

Work In Progress. The seasons in Buckwheat.

What it is

A little bit of JavaScript for the Sketchfab API to mix/fade between two textures on a 3D model. Open source: if it’s handy, please use it; if you’ve got improvements, please contribute.

Where it is

Available on github, with a preview here.

What it isn’t

It isn’t finished… It only works with one material on a model. It does appear to function on mobile devices, but not as a result of any plan, and it isn’t polished or optimised in any way, so don’t expect it to solve all your needs.

The Outro

The idea for the above goes back years and years, to some of the first objects I tried to 3D capture. Artefacts with fine details (a figurine, not the knife – that’s not archaeological at all, just a knife thing) that just don’t do well in the diffuse light of photogrammetry. Some fine features preserve in 3D and can be added back into a web-ready model with a bit of texture shoogling, specular maps and the like. Other features really don’t capture well, but can be integrated into your capture workflow with some patience, a tripod and a few techniques and clobber (raking light, RTI, NIR, polarised light etc. – this guy’s pages have some nice examples, kit, and a bread recipe too!).

But attempting to recreate ‘the real thing’ in 3D just isn’t always what we’re after. Being able to see all the details is well and good, but observing them and understanding them takes time. Sometimes we want to skip to the end and have someone show us what’s important, like in a good old hand-drawing. Having both together, and switching between source and interpretation in 3D space, is even better.

This sort of thing is pretty standard fare in 3D editing software and there are options online too. I really like 3DHOP as a 3D web viewer (PoTree is also great of course) – the tools, the functions, the ability for local installs and streamed models, and it’s pretty darn customisable. It’s great. But Sketchfab is pretty handy too – with everyone’s models in one place for idle discoveries, it’s easy to use, the viewer is getting pretty, and the API provides that bit of extra potential over your standard Sketchfab view.

It’s also pretty darn popular in heritage circles these days, and it was a conversation with fellow photographer Owen Murray that gave this thing some momentum. Switching between textures is pretty doable, with examples online and configurators available – and it’s been used to great effect for multispectral photography by the guys at Rigsters. But what about mixing between one texture and another more slowly? That must be doable, right?

Well, nothing said it isn’t, but nothing leapt out as an example of it already being done. It probably has been done, with a top-notch, easy, plug-and-play piece of code lying out there in plain sight, but I didn’t spot it, so this was made instead.

The main expected use is for sharing artefact illustrations or epigraphy between researchers, and with this in mind, clarity of texture is important. Unfortunately, with great texture size comes quickly diminishing performance, so a quick switcheroo was implemented to reduce lag during the mixing, along with a big yellow button to load in an extra-special-res texture when looking at it all close-up like. As mentioned at the start, there’s still more to be done on this, and it was to some extent an excuse for dipping a toe into the waters of the Sketchfab API, but hopefully a toe that someone will find useful.

CloudCompare is excellent. Especially cloud-to-cloud alignment. Sometimes it’s nice to reflect the repositioning done in CloudCompare in the original Metashape file, especially for reconstructed objects. It can be done easily using the CloudCompare transform matrix and some simple Python.

Example below. From the transform copied out of CloudCompare, delete the timestamp at the start (the [14:45:15] in this example) and add the commas and square brackets. Save as a .py file and open it from ‘Run script’ in Metashape, or paste it into the Python console.

import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# transform copied from CloudCompare, with the leading timestamp (e.g. [14:45:15]) removed
M = [[0.975619614124, -0.219468384981, 0.000000000000, -0.067751213908],
[0.219468384981, 0.975619614124, 0.000000000000, -0.004536916502],
[0.000000000000, 0.000000000000, 1.000000000000, 0.093857482076],
[0.000000000000, 0.000000000000, 0.000000000000, 1.000000000000]]

chunk.transform.matrix = Metashape.Matrix(M) * chunk.transform.matrix

I like food. I really do. I may not look it, but I enjoy eating, cooking (and occasionally washing up). You wouldn’t think it to look at me – the stickman moniker is for the scribbly drawings, but could just as well be attributed to myself. Nevertheless, I like food, and especially cakes.

But there is a problem. I’m pathologically incapable of following recipes, and have a habit of wandering off and doing other things. This can work out great if you want a light sourdough – so long as you don’t mind eating your bread days after you planned.

But this problem, like so many other so-called deviations from the norm or ideal character, can be a source of good, of discovery, serendipity, probiotics, and most importantly of all, a vehicle for butter.

So here is the latest discovery, and a recipe of sorts.

Full butter cinnamon Buissants/Croissets

A handy travel-ready flat buttery snack, perfect to smother the cockles on a cold morning and guaranteed to horrify French bakers.

Put around 400 grams of plain flour in a bowl with enough water to make it wet. Lob in a good shake of dry yeast, some salt, cover, and wander off to do something else. Less than 2 days is recommended.

Catch the beast before it escapes out of the kitchen altogether, add some more flour to make it manageable, and knead it into submission. Let rise again.

In a pan, soften around 1/4 kilo of butter, add more cinnamon than you think sensible, and enough brown sugar to almost be able to taste it. Slightly overheat, then spend the next 40 minutes impatiently trying to cool it into a spreadable paste.

Turn the dough out onto a flat surface and spread with the butter mixture, fold up, chill and later roll out. Repeat, and do a few other now-forgotten twists and layerings with butter.

Cut into triangles and roll into croissant shapes.

If you’re lucky you’ll have something that could pass as a croissant in a dark alley in February. At this point you need to put them near the oven to rise, forget about them while you cook something else, only returning the next morning to find they’ve all but lost the 3rd dimension, and bake for a bit.

At the end you’ll be rewarded with a tasty, crunchy thing – not quite croissant, not quite biscuit, but all of the butter in a third of the volume. A great on-the-go snack.

Tune in next time to see what’s grown in the kitchen.

a repost of old work.

Temple of Convenience, a fine hole to fall into.

Manchester.

black and white 360 panorama looking at the entrance to the temple of convenience pub. The entrance is in the centre and Oxford road extends on either side with people and cars motion blurred
Temple of Convenience, Manchester

silhouette of a figure with a spear looking up toward raised foot of a woolly mammoth in a small woodland.
If you’re wondering: no, mammoths weren’t quite* that big, but I bet they felt bigger.

I recently had the good fortune to work on material for the Hidden Depths Project

a fascinating research project headed by Penny Spikins at the University of York and funded by the Templeton Foundation, looking into the evolution, role and function of human emotions. Take a look at their project website to find out about their academic research here: https://hiddendepthsproject.wordpress.com/.

My role was creating support material and imagery for a set of teacher-led lessons and activity packs developed by Taryn Bell (University of York).

I’ll expand this post in coming weeks to discuss the technical and artistic design choices that went into some of the material. For now you can take a spin round the website here: http://www.hiddendepths.org/index.html

 

Very salient and useful tips by Tom Goskar. Take a look.

Top 10 tips on curating 3D digital objects

If you want to export shapefiles from CloudCompare (useful for tracing features) and use them in other programs, you will need a .prj file (projection file).

You can get one by saving a shapefile from one of the other programs (e.g. QGIS, Photoscan/Metashape), then copying and renaming its .prj file to match your CloudCompare shapefile.

Make sure it is the correct coordinate system.
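That copy-and-rename is also easy to script; a minimal sketch (the filenames here are placeholders):

import shutil

# placeholder filenames - a .prj saved alongside a shapefile exported from QGIS or Metashape,
# and the CloudCompare shapefile that is missing its projection file
source_prj = "exported_from_qgis.prj"
target_shp = "traced_in_cloudcompare.shp"

# the .prj just needs to share the shapefile's base name
shutil.copy(source_prj, target_shp.replace(".shp", ".prj"))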

if you import a shapefile into Photoscan/Metashape and it’s not visible, but you really think it should be,

there’s a good chance it’s missing:

a) a prj file

b) z values

if a), see the other post; if b), you can still see them in ortho/DEM view, and provided they relate to a coordinate system that matches an existing DEM (e.g. traced in GIS from a vertical DEM) you can project them into 3D space (might be messy). Other options include projecting into 3D in GIS, or transforming using CloudCompare or CAD (see other post… possibly).

An aide memoire. May be re-written in time.

Got data in Photoscan? Want to map it in a way that looks beautiful printed in 2D? Want that same data to live in 3D? Not got CAD?

It can be done with QGIS with a little help from CloudCompare.

Orthoimages and DEMs on vertical planes (defined by 3 points) can be digitised in Photoscan or QGIS, and the shapefiles transformed through CloudCompare to end up in QGIS or Photoscan as needed.

Which is best depends on preference. I (currently) like QGIS’ symbol flexibility while working. It’s easier to keep track of entities.

You will need 3 points for your plane: (0,0), (x,0) and (0,y). You can set these yourself, and they should be at a right angle.

(more later).

The transform in CloudCompare is simple with the point-picking alignment command, going between the drawing pins and the full coordinates.

Use the console to copy the transform and apply it to the shapefile lines as needed.

Save.

 

NOTE: when going from QGIS -> Photoscan (a.k.a. Metashape), a prj file is needed – simply export a random shape from Photoscan and rename its .prj file to match the one you’re importing.

When going from Photoscan -> QGIS, you probably need to save as a 2D shapefile, but I haven’t tried this.

 

Excerpts, test images and bits from the cutting room floor.

an early concept

Many congratulations to my colleagues on their discovery and detailed work on the earliest remains of a bready thing yet discovered, and for it being picked up by the news!

https://www.theguardian.com/science/2018/jul/16/archaeologists-find-earliest-evidence-of-bread

 

Almost ready with the prints for the art market.

a table full of prints