Does dynamic field access cost grow with the number of fields?

Will the cost of accessing an individual dynamic field on an object grow as more dynamic fields are added to that object? No. One of the benefits of dynamic fields is that, when it comes to access, you pay for what you use: the cost of accessing fields on an object grows with the number of field accesses, not with the number of fields that could be accessed.

This is in contrast to non-dynamic fields (or structures that grow, like vectors), whose cost to load increases as they get bigger.
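As a rough illustration of that difference, here is a minimal sketch (the module, struct, and function names are made up for the example): checking a single dynamic field only touches that one entry, whereas a plain `vector` field is loaded in full whenever its parent object is loaded.

```move
module examples::cost_demo {
    use sui::dynamic_field as df;
    use sui::object::UID;

    struct Registry has key {
        id: UID,
        // A plain vector field: loading `Registry` loads this whole vector,
        // so its size affects the cost of every transaction that touches it.
        all_users: vector<address>,
    }

    /// Checking one dynamic field only loads that single entry, no matter
    /// how many other dynamic fields hang off `registry.id`.
    public fun has_registered(registry: &Registry, user: address): bool {
        df::exists_with_type<address, bool>(&registry.id, user)
    }
}
```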

Originally from a thread in Discord:

I want to restrict each user to creating only one object. To do this, I create a registry object that records how many objects have been created, and I attach a dynamic field to it for each user who has created one. I have seen that the dynamic fields under the registry object are each represented as objects (they show up like NFTs), so if a large number of users create objects, there will be many of them under the registry object. Does this affect dynamic field checks, such as the exists_with_type check?

My test code looks like this:

```move
public entry fun create_warrior(reg: &mut WarriorRegistry, ctx: &mut TxContext) {
    // Abort if this sender has already registered a warrior.
    assert!(!df::exists_with_type<address, bool>(&reg.id, tx_context::sender(ctx)), 0);
    let warrior = SimpleWarrior {
        id: object::new(ctx),
        sword: option::none(),
        shield: option::none(),
    };
    transfer::transfer(warrior, tx_context::sender(ctx));
    reg.warrior_born = reg.warrior_born + 1;
    // Record the sender as a dynamic field on the registry.
    df::add<address, bool>(&mut reg.id, tx_context::sender(ctx), true);
}
```
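For reference, here is a minimal sketch of the module context this snippet assumes; the module name, the equipment types, and the struct layouts below are inferred from the fields the function touches, so treat them as assumptions rather than the asker's actual definitions.

```move
module examples::warrior {
    use std::option::{Self, Option};
    use sui::dynamic_field as df;
    use sui::object::{Self, UID};
    use sui::transfer;
    use sui::tx_context::{Self, TxContext};

    /// Shared registry: one `address -> bool` dynamic field per creator.
    /// Field layout inferred from the snippet above.
    struct WarriorRegistry has key {
        id: UID,
        warrior_born: u64,
    }

    /// Hypothetical equipment types; assumed to have `key + store`.
    struct Sword has key, store { id: UID }
    struct Shield has key, store { id: UID }

    struct SimpleWarrior has key {
        id: UID,
        sword: Option<Sword>,
        shield: Option<Shield>,
    }

    // ... create_warrior from the snippet above goes here ...
}
```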

Is this design suitable for large-scale user scenarios?

When the amount of data is very large, will this check, `assert!(!df::exists_with_type<address, bool>(&reg.id, tx_context::sender(ctx)), 0);`, be affected? For example, could the data grow too large for the check to run at all, or could the check consume a large gas fee?
