Hi,
the locking of a query because of uncommitted rows is a side effect of the isolation level. It is very useful: it prevents reads that are inconsistent with the committed data (remember it's a transactional DB, so the order of operations matters), and ultimately it is what makes it possible to roll back a transaction when needed.
It won't lock at random, in the true sense of random. It will block when some tx-1 changes data and some other tx-2 tries to read that same data while it may still be committed or rolled back by tx-1.
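You can see this with two sessions under SQL Server's default READ COMMITTED isolation level. A sketch (the table and column names here are made up for illustration):

```sql
-- Session 1 (tx-1): updates a row but does not commit yet,
-- so it holds an exclusive lock on that row
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
-- ...no COMMIT or ROLLBACK yet...

-- Session 2 (tx-2): under READ COMMITTED this SELECT blocks,
-- waiting until tx-1 commits or rolls back, because the value
-- it would read is still uncertain
SELECT Status FROM dbo.Orders WHERE OrderId = 42;
```

As soon as session 1 issues COMMIT or ROLLBACK, session 2's query returns with a consistent value.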
If you can't change the isolation level (and you shouldn't without a thorough understanding of the implications), the best approach is to avoid queries that create contention at the database level. In other words: commit changed/added rows ASAP, avoid queries that do full table scans, etc.
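As a sketch of the "commit ASAP" point (again with hypothetical table names), compare a long transaction that holds locks across several statements with a short, targeted one:

```sql
-- Anti-pattern: locks on both tables are held until the final COMMIT,
-- blocking every reader of those rows in the meantime
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
-- ...long-running application logic here keeps readers waiting...
UPDATE dbo.Invoices SET Paid = 1 WHERE OrderId = 42;
COMMIT;

-- Better: keep the transaction as short as possible, and make sure
-- the WHERE clause is supported by an index so only the intended
-- rows are touched (no full table scan, fewer locks)
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
COMMIT;
```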
Having a long process that spans inserts and/or updates across multiple tables on a heavily used system will for sure bring the application to a crawl (simple listings take forever, etc.). Using WITH (NOLOCK) to hide the problem will raise other issues sooner or later.
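The classic issue with WITH (NOLOCK) is the dirty read: it returns uncommitted data that may never become real. A two-session sketch (hypothetical names again):

```sql
-- Session 1: changes a value inside a transaction...
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

-- Session 2 (meanwhile): WITH (NOLOCK) reads at READ UNCOMMITTED,
-- so it does not wait and returns the in-flight, uncommitted balance
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;

-- Session 1: ...and then rolls back, so session 2 reported
-- a balance that never existed
ROLLBACK;
```

NOLOCK can also skip or double-read rows during page splits, so it is not just "slightly stale data"; the results can be outright wrong.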
Sometimes changing the tables, or adding indexes, may help. The first works by making sure that the information being read is not locked by unrelated inserts/updates/deletes (a deep understanding of the workflows and solid technical knowledge are needed to do this correctly). The second may speed up updates/deletes and reads, which can reduce some of the contention up to a degree (but be careful when adding indexes: they can also penalize writes/updates and/or rollbacks).
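For example, a covering index can let a frequent listing query be answered entirely from the index, so it no longer touches data pages locked by updates to unrelated columns. A sketch, assuming a hypothetical dbo.Orders table and a listing filtered by customer:

```sql
-- Covering index: the key supports the WHERE clause, and the INCLUDE
-- columns let the SELECT below be satisfied from the index alone
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Status);

-- The listing this index is meant to serve; it reads only index pages,
-- so updates to columns not in the index won't block it
SELECT OrderDate, Status
FROM dbo.Orders
WHERE CustomerId = 7;
```

The trade-off mentioned above applies: every extra index is extra work on each insert/update/delete, so add them for measured hot queries, not speculatively.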