Monday, February 3, 2020

Salesforce Tips




A selective query has at least one query filter on an indexed field that reduces the number of rows returned below the system threshold. When a field is indexed, its values are stored in a more efficient data structure. This takes up more space but improves performance when the field is used in query filters; see the sketch after the following list.

Fields that are indexed by default include:

  • Primary keys: Id, Name, and Owner fields; Email (indexed on contacts and leads)
  • Foreign keys: lookup or master-detail relationships
  • Audit dates: SystemModStamp, CreatedDate
  • Custom fields: External ID (Auto Number, Email, Number, Text), Unique
  • LastModifiedDate is automatically updated whenever a user creates or updates the record. It can be set to a back-dated value if your business requires preserving original timestamps when migrating data into Salesforce.
  • SystemModStamp is strictly read-only. It is updated not only when a user updates the record, but also when automated system processes (such as triggers and workflow actions) update the record. Because of this behavior, the stored values always satisfy ‘LastModifiedDate <= SystemModStamp’ but never ‘LastModifiedDate > SystemModStamp’.
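
A quick sketch of selectivity in Apex (the objects are illustrative, and Status__c is a hypothetical, non-indexed custom field):

// Selective: AccountId is a foreign key, indexed by default.
Id accountId = [SELECT Id FROM Account LIMIT 1].Id;
List<Case> byAccount = [SELECT Id FROM Case WHERE AccountId = :accountId];

// Likely not selective on a large table: Status__c is not indexed, so this
// filter may not reduce the rows returned below the system threshold.
List<Case> open = [SELECT Id FROM Case WHERE Status__c = 'Open'];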
 

How can LastModifiedDate filters affect SOQL performance?

So, how does this affect performance of a SOQL query? Under the hood, the SystemModStamp is indexed, but LastModifiedDate is not. The Salesforce query optimizer will intelligently attempt to use the index on SystemModStamp even when the SOQL query filters on LastModifiedDate. However, the query optimizer cannot use the index if the SOQL query filter uses LastModifiedDate to determine the upper boundary of a date range because SystemModStamp can be greater (i.e., a later date) than LastModifiedDate. This is to avoid missing records that fall in between the two timestamps.
Let’s work through an example to make this clear.
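
Here is a minimal sketch (the object and date literals are illustrative):

// The optimizer can use the SystemModStamp index for a lower-bound
// LastModifiedDate filter, since LastModifiedDate <= SystemModStamp always holds.
List<Account> changedSince = [
    SELECT Id FROM Account
    WHERE LastModifiedDate >= 2020-01-01T00:00:00Z
];

// It cannot use the index when LastModifiedDate sets the upper boundary of the
// range: a record's SystemModStamp may be later than its LastModifiedDate, so
// the index could miss matching records, and the optimizer falls back to a scan.
List<Account> changedBefore = [
    SELECT Id FROM Account
    WHERE LastModifiedDate <= 2020-01-31T23:59:59Z
];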



REST API: the record count resource returns the approximate number of records for each listed object:
/vXX.X/limits/recordCount?sObjects=Object List
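
A hedged sketch of calling this resource from Apex (the API version and object list are illustrative, and a callout to your own org's domain may require a Remote Site Setting):

HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getOrgDomainUrl().toExternalForm()
    + '/services/data/v47.0/limits/recordCount?sObjects=Account,Contact');
req.setMethod('GET');
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);
// The body is JSON, e.g. {"sObjects":[{"count":...,"name":"Account"}, ...]}
System.debug(res.getBody());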

Please note that the length and decimal places are only enforced when editing data via the standard web UI (that is, the Length and Decimal Places settings defined under Custom object | New field | Data type: Number).

Apex and API methods can actually save records with more decimal places than the field definition allows. This is true for standard and custom fields. Salesforce changes the display to match the definition, but the values are stored in the database as inserted.

When the user sets the precision of a custom field in the Salesforce application, values are displayed at that precision, even if the user entered a more precise value than the field defines. However, when the field is written through the API, no rounding occurs when the number field is retrieved: you get the value as stored.
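
A sketch of this behavior, assuming a hypothetical custom object My_Object__c with Amount__c defined as Number(16, 2):

// Apex can save more decimal places than the field definition allows.
My_Object__c rec = new My_Object__c(Amount__c = 1.23456);
insert rec; // succeeds; the standard UI would have enforced 2 decimal places

rec = [SELECT Amount__c FROM My_Object__c WHERE Id = :rec.Id];
System.debug(rec.Amount__c); // 1.23456 — stored as inserted; the UI displays 1.23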

https://justinyue.wordpress.com/2015/09/12/salesforce-data-modelling-tip-create-composite-key-for-your-custom-object/
In Salesforce, the Id field is the primary key for any SObject. Users can also create a custom text field and make it unique, but they cannot create a composite key for an SObject.
Suppose you create two Lookup fields on the Registration__c object, one for Student__c and one for Course__c, and you need to enforce a business rule that a student can register for a given course only once.
If you could build a composite key on the Registration__c object from the Student__c and Course__c Ids, your goal would be achieved. Since there’s no OOTB composite key feature, you need to be creative and find an alternative. Here is the solution:
  1. Create a Text field called “Key__c” on Registration__c and make it Unique.
  2. Create a trigger on Registration__c SObject and listen on Before Insert and Before Update events.
  3. In the trigger, assign the “Key__c” field the concatenated value of the Student__c and Course__c Id fields (see the sketch below).
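
A minimal sketch of the trigger, assuming the object and field names above:

trigger RegistrationUniqueKey on Registration__c (before insert, before update) {
    for (Registration__c reg : Trigger.new) {
        // Concatenating the two lookup Ids forms the composite key; the
        // Unique constraint on Key__c then rejects duplicate student/course
        // pairs with a DUPLICATE_VALUE error on insert or update.
        reg.Key__c = String.valueOf(reg.Student__c) + '-' + String.valueOf(reg.Course__c);
    }
}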
https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobjects_describesobjectresult.htm
https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/field_types.htm#i1435616
With rare exceptions, all objects in the API have a field of type ID. The field is named Id and contains a unique identifier for each record in the object. It is analogous to a primary key in relational databases. When you create() a new record, the Web service generates an ID value for the record, ensuring that it is unique within your organization’s data. You cannot use the update() call on ID fields. Because the ID value stays constant over the lifetime of the record, you can refer to the record by its ID value in subsequent API calls. Also, the ID value contains a three-character code that identifies the object type, which client applications can retrieve via the describeSObjects() call.
In addition, certain objects, including custom objects, have one or more fields of type reference that contain the ID value for a related record. These fields have names that end in the suffix “Id”, for example, OwnerId in the account object. OwnerId contains the ID of the user who owns that object. Unlike the field named Id, reference fields are analogous to foreign keys and can be changed via the update() call. For more information, see Reference Field Type.
Some API calls, such as retrieve() and delete(), accept an array of IDs as parameters—each array element uniquely identifies the row to retrieve or delete. Similarly, the update() call accepts an array of sObject records—each sObject contains an Id field that uniquely identifies the sObject.
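
The three-character type code at the start of each ID can also be read in Apex via a describe call, as a quick sketch:

// Each SObject type has a three-character key prefix at the start of its Ids.
Schema.DescribeSObjectResult d = Account.sObjectType.getDescribe();
System.debug(d.getKeyPrefix()); // '001' for Account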
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_intro.htm
https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/asynch_api_using_bulk_query.htm

https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/async_api_headers_enable_pk_chunking.htm
Use the PK Chunking request header to enable automatic primary key (PK) chunking for a bulk query job. PK chunking splits bulk queries on very large tables into chunks based on the record IDs, or primary keys, of the queried records.
Each chunk is processed as a separate batch that counts toward your daily batch limit, and you must download each batch’s results separately. PK chunking works only with simple queries: a field list and the object, with at most a WHERE clause and no other clauses or conditions.
PK chunking is supported for the following objects: Account, Asset, Campaign, CampaignMember, Case, CaseArticle, CaseHistory, Contact, Event, EventRelation, Lead, LoginHistory, Opportunity, Task, User, WorkOrder, WorkOrderLineItem, and custom objects.
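
For example, the request header looks like this (the chunk size is illustrative; it defaults to 100,000 records per chunk):

Sforce-Enable-PKChunking: chunkSize=100000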

SOQL Count() query fails with OPERATION_TOO_LARGE. Why?

A Salesforce engineer was kind enough to reply, so I thought I would post the answer here for everyone to benefit.

I will summarize what confused me about this problem. Since it's just a Count() query, I expected Salesforce to be able to handle an unlimited size in O(1) time. After all, it just needs to return the last row number. But depending on settings, Salesforce may need to do a security calculation for each row, so internally it actually has to visit each row in case some of them are culled from my view.

From SFDC engineering:

OPERATION_TOO_LARGE
The query has returned too many results. Some queries, for example those on objects that use a polymorphic foreign key like Task (or Note in your case), if run by a user without the "View All Data" permission, would require sharing rule checking if many records were returned. Such queries return this exception because the operation requires too many resources. To correct, add filters to the query to narrow the scope, or use filters such as date ranges to break the query up into a series of smaller queries.

In your case a count() query is the same as returning every record at the DB level, so if your count returns > 20K records, it is really the same as returning all that data from the DB perspective. After all, the access grants still have to be calculated to return an accurate count.
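
A hedged sketch of the suggested workaround: break the count into date-range slices so each query stays under the threshold (the object, date range, and slice size are illustrative):

// Sum monthly COUNT() slices instead of running one org-wide count.
Integer total = 0;
Datetime cursor = Datetime.newInstanceGmt(2019, 1, 1);
for (Integer i = 0; i < 12; i++) {
    Datetime next = cursor.addMonths(1);
    total += [SELECT COUNT() FROM Task
              WHERE CreatedDate >= :cursor AND CreatedDate < :next];
    cursor = next;
}
System.debug('Total: ' + total);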


https://success.salesforce.com/ideaView?id=08730000000LhBNAA0


Currently, datetime fields don't support millisecond precision. Even if you work with Datetime objects (which do support millisecond precision), the milliseconds are lost when you store them in the database.

This is a problem if you need to work with high time precision. It can be worked around in some ways, for example by storing the Unix time in a number field, but it would be much more natural for a Datetime field to be able to store such precision.
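
A sketch of the number-field workaround, assuming a hypothetical custom object Log_Entry__c with Epoch_Millis__c defined as Number(18, 0):

// Persist millisecond-precision Unix time, since a Datetime field would drop it.
Log_Entry__c entry = new Log_Entry__c();
Datetime captured = Datetime.now();
entry.Epoch_Millis__c = captured.getTime(); // milliseconds since the Unix epoch
insert entry;

// Reconstruct the full-precision Datetime on read.
entry = [SELECT Epoch_Millis__c FROM Log_Entry__c WHERE Id = :entry.Id];
Datetime restored = Datetime.newInstance(entry.Epoch_Millis__c.longValue());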
