
Comments (23)

nvkelso commented on August 11, 2024

Related: #30

nvkelso commented on August 11, 2024

/cc @bcamper @blair1618

nvkelso commented on August 11, 2024

We investigated using a GPU-specific format like ETC1, but it turned out support was not widespread enough for general application.

zerebubuth commented on August 11, 2024

Additional note: the issues with ETC1 were that it's not supported on iOS and that it's technically lossy (although perhaps not much in practice, given the distribution of height data). The feeling was that doing the gradient computations on the client would outweigh any benefit from a GPU-optimised texture format.

The 4 channels we discussed were (Python-ish pseudocode; gradient and height are the inputs):

import math

n = gradient.normalize()  # unit-length gradient, i.e. magnitude = 1.0

def encode(v):
    # map a component in [-1.0, 1.0] to an integer in [1, 255]
    return int(round(127 * (v + 1.0))) + 1

scale_factor = 8.45
signed_height_exp = int(scale_factor * math.log2(abs(height) + 1))
if height < 0:
    signed_height_exp += 128  # high bit carries the sign

[r, g, b, a] = [encode(n.x), encode(n.y), encode(n.z), signed_height_exp]

Now that I've written it down, it seems really complex. Thoughts?

Also, the scale_factor=8.45 makes best use of the bits, but scale_factor=8 would allow us to accurately represent power-of-two integer heights. Not sure whether one or the other scheme is "better" in any meaningful way.
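For reference, a minimal sketch of the inverse (my own addition, assuming the encoding above; the int() truncation makes recovery approximate):

def decode_height(a, scale_factor=8.45):
    # high bit carries the sign; the remaining bits carry the exponent
    sign = -1.0 if a >= 128 else 1.0
    e = a - 128 if a >= 128 else a
    return sign * (2.0 ** (e / scale_factor) - 1.0)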

matteblair commented on August 11, 2024

Yeah, assuming "gradient" refers to ∇ of the isosurface function, the normal map encoding looks perfect.

I don't have any strong preference between scale factors of 8 and 8.45, though I would tend to believe that styling won't usually make distinctions based on power-of-two elevations.

It just occurred to me though that the exponential encoding could behave kind of weirdly when interpolated linearly across a texture :\
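To put a number on that (my own worked example, using the encoding above): linearly interpolating the encoded value halfway between 100m and 1000m decodes to roughly the geometric mean of the two heights, about 317m, rather than the 550m a linear scale would give.

import math
s = 8.45
a_mid = (s * math.log2(100 + 1) + s * math.log2(1000 + 1)) / 2
print(2 ** (a_mid / s) - 1)  # ~317.0, not 550.0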

zerebubuth commented on August 11, 2024

I was thinking that the interpolation would just be for colouring - or were we also planning on doing displacement mapping? If the latter, then linearly interpolating in the exponential scale would produce some odd artefacts.

If the exponential scale is just used for colouring, then we never have to "un-exponential" it, and linear interpolation is fine. (i.e: colour is a look-up table on a, and doesn't involve recovering height).

If we used a linear scale instead, then a = int(h) / 78 + 140 would capture the Challenger Deep to Everest range, but each point would only be accurate to the nearest 78m in height, which seems pretty coarse... @nvkelso would that be okay for colour-mapping height ranges?
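A minimal sketch of that linear alternative (my own illustration; 78m comes from the roughly 19,800m Challenger Deep to Everest span divided into 255 steps):

def encode_linear(h):
    # floor division: heights from about -10,920m to +8,848m land in 0..255
    return int(h) // 78 + 140

def decode_linear(a):
    # recovered only to the nearest 78m
    return (a - 140) * 78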

bcamper commented on August 11, 2024

The issue is that we can't (I don't believe anyway) tell GL to interpolate the RGB components but not the 4th channel (without doing something significantly inefficient like copying the 4th channel to a second texture that is not interpolated... though I suppose it's possible).


nvkelso commented on August 11, 2024

For the bathy (under mean tide sea level):

http://www.naturalearthdata.com/downloads/10m-physical-vectors/10m-bathymetry/

0 m
200 m
1,000 m
2,000 m
3,000 m
4,000 m
5,000 m
6,000 m
7,000 m
8,000 m
9,000 m
10,000 m

For land it'd be nice to have something more like 20 or 50 meters? Something that gets nice 100m "index" values.

bcamper commented on August 11, 2024

Another option is an explicit index table of some kind that maps to the 8-bit values. That has the downside of more involved decoding logic. Well, maybe actually "simpler" logic-wise, but heavier in terms of code/data requirements if you had to somehow access an index table in a shader... plus there's still the interpolation issue.


zerebubuth commented on August 11, 2024

Let's say we have a function for turning a height into a byte for use in the alpha channel, fn channel(int16 height) -> uint8; we then interpolate that (still in the range 0..255) and map it to a colour with fn colour_for(float a) -> (float, float, float). We don't necessarily have to decode the alpha channel back into a height. Judging from the kinds of mappings which get used (e.g. hypsometric colouring), a simple 5 or 6 stop colour ramp looks sufficient.

In which case, we can use channel to put more detail in the 0-1km range, for example, than in the rest of the range. That would change the values in colour_for, but not necessarily require a complete decoding back to height.

However, the issue will be that if we want to interpolate height linearly, then we need to decode back to height and the only thing that makes sense is a 78m (or 100m if we want nice, round values) interpolation. Quite frankly, it's pretty much useless for detailed work, but the example mapping above shows there's little change between 0-200m.

In summary: We should figure out if we need linear interpolation of heights, i.e: displacement mapping, without pulling down the accurate height data. I don't think we do, and I'll be making some example images to show what each scheme might look like.
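In the meantime, a minimal sketch of that two-stage channel / colour_for scheme (my own illustration; the breakpoints and colours are placeholders, not agreed values):

def channel(height):
    # non-linear height -> byte, spending most of the range on 0-1km
    if height <= 0:
        return max(0, 64 + height // 200)
    if height <= 1000:
        return 64 + height // 8
    return min(255, 189 + (height - 1000) // 120)

RAMP = [(0, (0.2, 0.4, 0.8)), (64, (0.1, 0.6, 0.3)),
        (189, (0.8, 0.7, 0.4)), (255, (1.0, 1.0, 1.0))]

def colour_for(a):
    # look-up on the (possibly interpolated) byte; height is never recovered
    for stop, colour in reversed(RAMP):
        if a >= stop:
            return colour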

bcamper commented on August 11, 2024

There are two different use cases here:

  • Basic color styling: this is what we need for basic 2d hillshading, and I think that, as you say, we only need a small number of values (color ramp of 5-6, maybe max 10 colors?), so a coarse representation of height seems fine. I think we can get away with 78m increments here (?). We want typical hillshading to be achievable and efficient with a single data source, in this case the normal map + 8-bit height channel.
  • Displacement mapping: this requires more height detail, and for that case I think our only reasonable choice is to pull in the separate elevation tiles with the raw height (the 24-bit, 16.8 fixed point format). Since this use case is less common and more advanced, it's reasonable to me that it requires a separate/additional data source.
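For reference, a sketch of decoding that raw format (my assumption of the layout: two channels for the offset integer part, one for the 8 fractional bits; the exact packing may differ):

def decode_fixed(r, g, b):
    # 16.8 fixed point: r and g carry the offset integer metres, b the fraction
    return (r * 256 + g) - 32768 + b / 256.0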

bcamper commented on August 11, 2024

Normal map overview for reference: http://docs.unity3d.com/Manual/StandardShaderMaterialParameterNormalMap.html

zerebubuth commented on August 11, 2024

These are just greyscale, but they'll do for now.

Linear interpolation

Linear interpolation for the whole world looks pretty good.

[image: etopo_remap_linear]

But when you zoom into a coastal area, such as San Francisco:

[image: etopo_remap_sf_linear]

With the contrast stretched, you can see how little detail is available:

[image: etopo_remap_sfeq_linear]

Exponential scaling

The world still looks quite good, but we're obviously missing a lot of the detail which was in the bathymetry:

[image: etopo_remap_exp]

San Francisco looks pretty good: there's a lot of detail around the shoreline.

[image: etopo_remap_sf_exp]

Stretching the contrast doesn't really do much - it's using almost all the 8-bit range already in such a small coastal area. This is because we've heavily weighted away from mountainous or deep-sea areas.

[image: etopo_remap_sfeq_exp]

Table-based approach

With a table of:

  • -11,000m to -1,000m in 1000m intervals,
  • -100m, -50m, -20m, -10m, -1m levels,
  • 0m to 3000m in 20m intervals,
  • 3000m to 6000m in 50m intervals,
  • 6000m to 8800m in 100m intervals (just missing out the peak of Everest at 8,848m)
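By my count that's 255 levels, which just fits in a byte. A sketch of building the table and encoding against it (my own illustration, not the exact code used):

import bisect

levels = list(range(-11000, 0, 1000))    # -11,000m .. -1,000m
levels += [-100, -50, -20, -10, -1]
levels += list(range(0, 3000, 20))       # 0m .. 2,980m
levels += list(range(3000, 6000, 50))    # 3,000m .. 5,950m
levels += list(range(6000, 8900, 100))   # 6,000m .. 8,800m
assert len(levels) == 255

def encode_table(height):
    # index of the highest level not above this height
    return max(0, bisect.bisect_right(levels, height) - 1)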

The world looks pretty dark, but there's lots of detail, except in the oceans:

[image: etopo_remap_table]

SF initially looks very dark:

[image: etopo_remap_sf_table]

But when the contrast is stretched, you can see there's lots of detail present - on land, about as much as "exponential", but in the water clearly less so.

[image: etopo_remap_sfeq_table]

bcamper commented on August 11, 2024

Nice, very interesting. Since the main purpose of the 8-bit height channel is to distinguish between different terrain types (if height is being used for more than basic styling, the user would probably be better off with the higher resolution elevation tiles), maybe we would be better off just deriving an integer that maps to a table of types? I don't know how practical this is, but it could also lend itself to future enhancement from analyzing aerial imagery, etc. (and I believe LIDAR data often has this type of encoding?).

zerebubuth commented on August 11, 2024

We can't really detect the terrain type from height data - even assuming height < 0 means water gets into trouble in the Netherlands, much of which lies below sea level. For landuse type, we'd want to integrate a dataset specifically for that. But I feel like we're starting to wander into custom, special-purpose, just-for-us data, and I think that's out of scope for Joerd.

bcamper commented on August 11, 2024

I agree it's probably out of scope for this repo/current project, but I don't think it's data that would be just for us! But yeah, I get that this needs to be derived from other datasets and not just height. It also just leaves me wondering how useful height is going to be for styling, period (regardless of resolution)?

zerebubuth commented on August 11, 2024

I think hill-shading is pretty crucial for some types of map styles - particularly ones related to outdoor activities. Height data is also wonderful for drawing contours, which are useful on many kinds of map, although that may be better in a separate vector tile set.

Hypsometric tinting is "good enough" for many purposes, but it's never going to be able to distinguish between, say, the Sahara and Nebraska as they have the same (mean) elevation, but very different ground cover. That being said, I'm sure cartographers would rather have the ability to do hypsometric tinting than nothing at all.

bcamper commented on August 11, 2024

Yep. So... we're left picking the best 8-bit approximation for now? :)


nvkelso commented on August 11, 2024

The table-based approach looks fantastic!

bcamper commented on August 11, 2024

@blair1618 your thoughts?

matteblair commented on August 11, 2024

If we don't mind mandating a set of elevation thresholds, the table-based approach would give us the greatest control over how to best use the limited precision we have. It could also be advantageous for style authoring to have "human readable" thresholds where the data is known to match the source values.

Technically these all seem feasible, though the table approach is slightly more complicated than the others.

meetar commented on August 11, 2024

For the record: yes there are definitely those among us who are very interested in displacement mapping, and the table-based approach seems reasonable assuming good documentation, and also I can't believe you waited til I was out for a week to have this discussion 🐧


Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.