Comments (4)
Are these new fs_storage modules ready and safe for production on v16?
I'm looking forward to testing this with S3 buckets (Wasabi and MinIO providers) for some projects.
I have a dev/staging environment ready for testing, but if it's not ready yet there's no point in me moving forward.
I see basically three modules: fs_storage, fs_file and fs_attachment. Are all of them required together?
I'm working in a container environment over multiple servers, so I suppose fs_attachment is definitely required.
fs_storage is the "base", I suppose?
The only confusion is fs_file, which is also still showing alpha status.
Also, are there any docs on how I can explicitly exclude/ignore certain images like thumbnails etc.?
I want to avoid small images also being loaded from an external S3 bucket; I'd rather those load from the database instead of the default filestore on the server.
Any pointers and feedback, please?
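On the "small images should stay in the database" point: I haven't found dedicated documentation either, but the decision rule being asked about can be sketched as a simple size threshold. This is purely illustrative, not fs_attachment API; the `route_binary` helper and the 100 KiB cutoff are my own names.

```python
# Illustrative sketch only: the real routing in fs_attachment is configured on
# the storage backend; this just demonstrates the size-threshold idea.
SMALL_FILE_THRESHOLD = 100 * 1024  # hypothetical cutoff: 100 KiB


def route_binary(size_bytes, threshold=SMALL_FILE_THRESHOLD):
    """Decide where a binary of `size_bytes` should be stored: small files
    (thumbnails, icons) stay in the database, larger ones go to S3."""
    return "db" if size_bytes <= threshold else "s3"


print(route_binary(16 * 1024))        # -> db  (thumbnail stays local)
print(route_binary(2 * 1024 * 1024))  # -> s3  (full-size image goes remote)
```

Whatever the final mechanism, keeping the threshold in one place makes it easy to tune once you see real traffic.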
from storage.
There hasn't been any activity on this issue in the past 6 months, so it has been marked as stale and it will be closed automatically if no further activity occurs in the next 30 days.
If you want this issue to never become stale, please ask a PSC member to apply the "no stale" label.
@lmignon Shouldn't migration scripts be created in those modules (for those people coming from v15)?
@MiquelRForgeFlow I've developed my own migration script to migrate from 10 to 16. It's not trivial, but here is the SQL...
Migration of product images and product media:
# Copyright 2023 ACSONE SA/NV
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
import logging

_logger = logging.getLogger(__name__)


def migrate_product_media(cr):
    # create temporary columns in fs_product_media / fs_media to store the
    # old storage file id
    cr.execute(
        """
        ALTER TABLE fs_product_media ADD COLUMN x_old_storage_file_id integer;
        ALTER TABLE fs_media ADD COLUMN x_old_storage_file_id integer;
        """
    )
    _logger.info("Create fs_product_media records from storage_file")
    cr.execute(
        """
        INSERT INTO fs_product_media (
            product_tmpl_id, x_old_storage_file_id, lang, sequence,
            write_uid, write_date, create_uid, create_date, name, mimetype
        )
        SELECT
            r.product_tmpl_id, f.id, f.lang, r.sequence,
            f.write_uid, f.write_date, f.create_uid, f.create_date, f.name, f.mimetype
        FROM
            product_media_relation AS r,
            storage_media AS s,
            storage_file AS f
        WHERE
            r.media_id = s.id
            AND s.file_id = f.id;
        """
    )
    _logger.info("%s fs_product_media records created", cr.rowcount)
    # create fs_media records for media that are linked to more than one
    # product_media_relation
    cr.execute(
        """
        INSERT INTO fs_media (
            x_old_storage_file_id, lang,
            write_uid, write_date, create_uid, create_date, name, mimetype
        )
        SELECT
            f.id, f.lang,
            f.write_uid, f.write_date, f.create_uid, f.create_date, f.name, f.mimetype
        FROM
            storage_media AS s,
            storage_file AS f
        WHERE
            s.file_id = f.id
            AND s.id IN (
                SELECT media_id
                FROM product_media_relation
                GROUP BY media_id
                HAVING count(*) > 1
            );
        """
    )
    _logger.info("%s fs_media records created", cr.rowcount)
    _logger.info("Link fs_product_media to fs_media")
    cr.execute(
        """
        UPDATE fs_product_media AS pm
        SET
            media_id = fs_media.id,
            link_existing = True
        FROM fs_media
        WHERE pm.x_old_storage_file_id = fs_media.x_old_storage_file_id;
        """
    )
    cr.execute("SELECT id FROM fs_storage WHERE code = 'fsprd_eshop'")
    fs_storage_id = cr.fetchone()[0]
    _logger.info(
        "Create ir_attachment records from storage_file linked to one "
        "product_media_relation"
    )
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'fs.product.media', fs_product_media.id, 'specific_file',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            product_media_relation AS r,
            storage_media AS s,
            storage_file AS f,
            fs_product_media
        WHERE
            fs_product_media.x_old_storage_file_id = f.id
            AND fs_product_media.media_id IS NULL
            AND r.media_id = s.id
            AND s.file_id = f.id
        """,
        (fs_storage_id,),
    )
    _logger.info("%s ir_attachment records created", cr.rowcount)
    _logger.info(
        "Create ir_attachment records from storage_file linked to one fs_media"
    )
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'fs.media', fs_media.id, 'file',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            storage_file AS f,
            fs_media
        WHERE
            fs_media.x_old_storage_file_id = f.id
        """,
        (fs_storage_id,),
    )
    _logger.info("%s ir_attachment records created", cr.rowcount)
def migrate_product_image(cr):
    # create temporary columns in fs_product_image / fs_image to store the
    # old storage file id
    cr.execute(
        """
        ALTER TABLE fs_product_image ADD COLUMN x_old_storage_file_id integer;
        ALTER TABLE fs_image ADD COLUMN x_old_storage_file_id integer;
        """
    )
    _logger.info("Create fs_product_image records from storage_file")
    cr.execute(
        """
        INSERT INTO fs_product_image (
            product_tmpl_id, x_old_storage_file_id, sequence,
            write_uid, write_date, create_uid, create_date, name, mimetype
        )
        SELECT
            r.product_tmpl_id, f.id, r.sequence,
            f.write_uid, f.write_date, f.create_uid, f.create_date, f.name, f.mimetype
        FROM
            product_image_relation AS r,
            storage_image AS s,
            storage_file AS f
        WHERE
            r.image_id = s.id
            AND s.file_id = f.id;
        """
    )
    _logger.info("%s fs_product_image records created", cr.rowcount)
    # create fs_image records for images that are linked to more than one
    # product_image_relation
    cr.execute(
        """
        INSERT INTO fs_image (
            x_old_storage_file_id,
            write_uid, write_date, create_uid, create_date, name, mimetype
        )
        SELECT
            f.id,
            f.write_uid, f.write_date, f.create_uid, f.create_date, f.name, f.mimetype
        FROM
            storage_image AS s,
            storage_file AS f
        WHERE
            s.file_id = f.id
            AND s.id IN (
                SELECT image_id
                FROM product_image_relation
                GROUP BY image_id
                HAVING count(*) > 1
            );
        """
    )
    _logger.info("%s fs_image records created", cr.rowcount)
    _logger.info("Link fs_product_image to fs_image")
    cr.execute(
        """
        UPDATE fs_product_image AS pm
        SET
            image_id = fs_image.id,
            link_existing = True
        FROM fs_image
        WHERE pm.x_old_storage_file_id = fs_image.x_old_storage_file_id;
        """
    )
    cr.execute("SELECT id FROM fs_storage WHERE code = 'fsprd_eshop'")
    fs_storage_id = cr.fetchone()[0]
    _logger.info(
        "Create ir_attachment records from storage_file linked to one "
        "product_image_relation"
    )
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'fs.product.image', fs_product_image.id, 'specific_image',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            product_image_relation AS r,
            storage_image AS s,
            storage_file AS f,
            fs_product_image
        WHERE
            fs_product_image.x_old_storage_file_id = f.id
            AND fs_product_image.image_id IS NULL
            AND r.image_id = s.id
            AND s.file_id = f.id
        """,
        (fs_storage_id,),
    )
    _logger.info("%s ir_attachment records created", cr.rowcount)
    _logger.info(
        "Create ir_attachment records from storage_file linked to one fs_image"
    )
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'fs.image', fs_image.id, 'image',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            storage_file AS f,
            fs_image
        WHERE
            fs_image.x_old_storage_file_id = f.id
        """,
        (fs_storage_id,),
    )
    _logger.info("%s ir_attachment records created", cr.rowcount)
    _logger.info("Link main image on product_template")
    cr.execute(
        """
        UPDATE product_template
        SET main_image_id = sub.id
        FROM (
            SELECT
                fs.id,
                pt.id AS product_tmpl_id
            FROM
                fs_product_image fs,
                product_template pt,
                storage_image si
            WHERE
                fs.x_old_storage_file_id = si.file_id
                AND si.id = pt.x_main_image_id
        ) AS sub
        WHERE sub.product_tmpl_id = product_template.id;
        """
    )
    _logger.info("%s main images linked", cr.rowcount)
    _logger.info("Link main image on product_product")
    cr.execute(
        """
        UPDATE product_product
        SET main_image_id = tmpl.main_image_id
        FROM product_template tmpl
        WHERE tmpl.id = product_product.product_tmpl_id;
        """
    )
    _logger.info("%s main images linked", cr.rowcount)
    _logger.info("Link variant images on product_product")
    cr.execute(
        """
        INSERT INTO fs_product_image_product_product_rel (
            product_product_id,
            fs_product_image_id
        )
        SELECT
            pp.id,
            fs.id
        FROM
            product_product pp,
            product_template pt,
            fs_product_image fs
        WHERE
            pp.product_tmpl_id = pt.id
            AND fs.product_tmpl_id = pt.id;
        """
    )
    _logger.info("%s variant images linked", cr.rowcount)
    _logger.info("Migrate image_medium for fs_product_image")
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'fs.product.image', fsi.id, 'specific_image_medium',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            storage_file f,
            fs_product_image fsi,
            storage_image si,
            storage_thumbnail st
        WHERE
            f.id = si.file_id
            AND si.file_id = fsi.x_old_storage_file_id
            AND st.res_model = 'storage.image'
            AND si.id = st.res_id
            AND st.size_x = 128;
        """,
        (fs_storage_id,),
    )
    _logger.info("%s ir_attachment records created", cr.rowcount)
    _logger.info("Migrate image_medium for fs_image")
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'fs.image', fsi.id, 'image_medium',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            storage_file f,
            fs_image fsi,
            storage_image si,
            storage_thumbnail st
        WHERE
            f.id = si.file_id
            AND si.file_id = fsi.x_old_storage_file_id
            AND st.res_model = 'storage.image'
            AND si.id = st.res_id
            AND st.size_x = 128;
        """,
        (fs_storage_id,),
    )


def migrate(cr, version):
    migrate_product_media(cr)
    migrate_product_image(cr)
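A note on the `concat('fsprd_eshop://', f.slug)` pattern above: fs_attachment encodes the backend into `store_fname` as `<storage_code>://<filename>`. A small helper I used while debugging the migration, to check which backend a given attachment ended up on (the function name is mine, not part of the modules):

```python
def split_store_fname(store_fname):
    """Split an fs_attachment-style store_fname into (storage_code, filename).

    Mirrors the concat('fsprd_eshop://', f.slug) pattern used in the SQL above.
    Regular filestore paths have no "://" prefix and yield a None code.
    """
    code, sep, filename = store_fname.partition("://")
    if not sep:
        # no protocol prefix: a plain filestore path, not an fs_storage one
        return None, store_fname
    return code, filename


print(split_store_fname("fsprd_eshop://product-42.jpg"))  # ('fsprd_eshop', 'product-42.jpg')
print(split_store_fname("ab/abcdef0123"))                 # (None, 'ab/abcdef0123')
```

Handy for spot-checking `SELECT store_fname FROM ir_attachment` output after the run.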
Migration of search engine thumbnails:
# Copyright 2023 ACSONE SA/NV
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
import logging

_logger = logging.getLogger(__name__)


def migrate_thumbnail(cr):
    _logger.info("Create search engine thumbnails")
    # create a temporary column in se_thumbnail to store the old storage file id
    cr.execute("ALTER TABLE se_thumbnail ADD COLUMN x_old_storage_file_id integer")
    # create se_thumbnail records for storage_thumbnail with size_x in
    # (550, 300, 60); these are the sizes used by the website only
    # create thumbnails for fs_image
    cr.execute(
        """
        WITH attachment_id_file_id AS (
            SELECT
                att.id,
                fs.x_old_storage_file_id
            FROM
                ir_attachment AS att
            JOIN fs_image fs
                ON fs.id = att.res_id
                AND att.res_model = 'fs.image'
                AND att.res_field = 'image'
        )
        INSERT INTO se_thumbnail (
            size_x, size_y, mimetype, name, base_name, attachment_id,
            create_uid, create_date, write_date, x_old_storage_file_id
        )
        SELECT
            size_x,
            size_y,
            file_thumbnail.mimetype,
            file_thumbnail.name,
            th_info.url_key AS base_name,
            att.id AS attachment_id,
            th_info.create_uid,
            th_info.create_date,
            th_info.write_date,
            file_thumbnail.id AS x_old_storage_file_id
        FROM
            storage_thumbnail th_info
            JOIN storage_image image_origin ON th_info.res_id = image_origin.id
            JOIN storage_file file_origin ON image_origin.file_id = file_origin.id
            JOIN storage_file file_thumbnail ON th_info.file_id = file_thumbnail.id
            JOIN fs_image AS fs_image_origin
                ON fs_image_origin.x_old_storage_file_id = file_origin.id
            JOIN attachment_id_file_id AS att
                ON att.x_old_storage_file_id = file_origin.id
        WHERE size_x IN (550, 300, 60)
        """
    )
    _logger.info("%s se_thumbnail records created", cr.rowcount)
    # create thumbnails for fs_product_image
    cr.execute(
        """
        WITH attachment_id_file_id AS (
            SELECT
                att.id,
                fs.x_old_storage_file_id
            FROM
                ir_attachment AS att
            JOIN fs_product_image fs
                ON fs.id = att.res_id
                AND att.res_model = 'fs.product.image'
                AND att.res_field = 'specific_image'
        )
        INSERT INTO se_thumbnail (
            size_x, size_y, mimetype, name, base_name, attachment_id,
            create_uid, create_date, write_date, x_old_storage_file_id
        )
        SELECT
            size_x,
            size_y,
            file_thumbnail.mimetype,
            file_thumbnail.name,
            th_info.url_key AS base_name,
            att.id AS attachment_id,
            th_info.create_uid,
            th_info.create_date,
            th_info.write_date,
            file_thumbnail.id AS x_old_storage_file_id
        FROM
            storage_thumbnail th_info
            JOIN storage_image image_origin ON th_info.res_id = image_origin.id
            JOIN storage_file file_origin ON image_origin.file_id = file_origin.id
            JOIN storage_file file_thumbnail ON th_info.file_id = file_thumbnail.id
            JOIN fs_product_image AS fs_image_origin
                ON fs_image_origin.x_old_storage_file_id = file_origin.id
            JOIN attachment_id_file_id AS att
                ON att.x_old_storage_file_id = file_origin.id
        WHERE size_x IN (550, 300, 60)
        """
    )
    _logger.info("%s se_thumbnail records created", cr.rowcount)
    cr.execute("SELECT id FROM fs_storage WHERE code = 'fsprd_eshop'")
    fs_storage_id = cr.fetchone()[0]
    # create ir_attachment records for storage_thumbnail with size_x in (550, 300, 60)
    cr.execute(
        """
        INSERT INTO ir_attachment (
            name, type, res_model, res_id, res_field,
            create_uid, create_date, write_uid, write_date,
            store_fname, mimetype, file_size, checksum,
            fs_storage_id, fs_url, fs_storage_code, fs_filename
        )
        SELECT
            f.name, 'binary', 'se.thumbnail', se_thumbnail.id, 'image',
            f.create_uid, f.create_date, f.write_uid, f.write_date,
            concat('fsprd_eshop://', f.slug), f.mimetype, f.file_size, f.checksum,
            %s, f.url, 'fsprd_eshop', f.slug
        FROM
            storage_file AS f,
            se_thumbnail
        WHERE
            se_thumbnail.x_old_storage_file_id = f.id
        """,
        (fs_storage_id,),
    )
    _logger.info("%s ir_attachment records created", cr.rowcount)


def migrate(cr, version):
    migrate_thumbnail(cr)
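Before running scripts like these for real, I'd do a dry run on a staging copy and compare row counts between the old and new tables. A rough sketch of the checks I ran (the DSN and query set are examples, and `psycopg2` is assumed available alongside Odoo; adjust to your data):

```python
# Sanity checks after the migration: each old table's row count should be
# reflected in the new fs_* tables. Queries are examples, not exhaustive.
CHECKS = {
    "storage_file": "SELECT count(*) FROM storage_file",
    "fs_product_media": "SELECT count(*) FROM fs_product_media",
    "fs_product_image": "SELECT count(*) FROM fs_product_image",
    "se_thumbnail": "SELECT count(*) FROM se_thumbnail",
    "fs_attachments": (
        "SELECT count(*) FROM ir_attachment WHERE fs_storage_code = 'fsprd_eshop'"
    ),
}


def run_checks(dsn="dbname=staging"):  # hypothetical connection string
    import psycopg2  # imported lazily so the module loads without a DB

    with psycopg2.connect(dsn) as conn, conn.cursor() as cr:
        for name, query in CHECKS.items():
            cr.execute(query)
            print(name, cr.fetchone()[0])
```

Running the whole migration inside a transaction you roll back first is a cheap way to catch schema mismatches before committing anything.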
Related Issues (20)
- Storage Image widget not work
- Migration to version 16.0
- Access Forbidden
- [16.0] use fsspec for implementing backend
- Use S3 Backend for public Website product
- Error when creating a new attachment
- Migration to version 17.0
- exisiting attachments migration
- When directory_path is set the wrong protocol is used
- Documents app preview issue
- [16.0] fs_storage + EDI: connection closed after some time
- S3 on AWS
- fs_product_multi_image from existing db
- Error while installing
- How does it work
- Module tries to add already existing columns when installing a second time
- Module does not work with S3 when deployed in Odoo docker container image
- [17.0] fs_attachment unable to save Attachment Rules
- [17.0] fs_storage: Connection closed after some time