Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem, typically to reduce time to solution. It also makes it possible to solve problems too large for a single machine's memory, and it can be more cost-effective than serial computing when many commodity processors are cheaper than a single, equivalently fast one.

Parallelism takes different forms. In data parallelism, the same operation is applied concurrently to different pieces of a data set; in task parallelism, distinct tasks run concurrently, possibly over the same data.

Flynn's taxonomy classifies computer architectures by the number of concurrent instruction streams and data streams: SISD (single instruction, single data), SIMD (single instruction, multiple data), MISD (multiple instruction, single data), and MIMD (multiple instruction, multiple data).

Shared memory and distributed memory are the two common memory architectures. In a shared memory machine, all processors see a single global address space, which makes communication implicit but limits how far the design scales; in a distributed memory machine, each node has its own memory, which scales further but makes communication between nodes explicit.

Programming models mirror these architectures: shared memory models coordinate threads through a common address space, message passing models exchange data between otherwise independent processes, and data parallel models apply one operation across a partitioned data structure.

Designing a parallel program involves three recurring concerns: partitioning the work into units that can run concurrently, arranging communication between the resulting processes or threads, and synchronizing them so shared state stays consistent. The sketches below illustrate these ideas.
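To make the data/task distinction concrete, here is a minimal sketch in Go, chosen purely for illustration; the function name `dataParallelSquare` and the chunking scheme are our own, not any standard API. The first half applies one operation across chunks of a slice (data parallelism); the second half runs two unrelated tasks concurrently (task parallelism).

```go
// A minimal sketch contrasting data and task parallelism.
// All names here are illustrative, not part of any standard API.
package main

import (
	"fmt"
	"sync"
)

// dataParallelSquare applies the same operation (squaring) to disjoint
// chunks of one slice: data parallelism.
func dataParallelSquare(xs []int, workers int) {
	var wg sync.WaitGroup
	chunk := (len(xs) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if hi > len(xs) {
			hi = len(xs)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			for i := range part {
				part[i] *= part[i]
			}
		}(xs[lo:hi])
	}
	wg.Wait()
}

func main() {
	xs := []int{1, 2, 3, 4, 5, 6, 7, 8}
	dataParallelSquare(xs, 4)
	fmt.Println(xs) // [1 4 9 16 25 36 49 64]

	// Task parallelism: two different tasks run concurrently over the
	// same data. Each goroutine writes its own result variable.
	var wg sync.WaitGroup
	var sum, max int
	wg.Add(2)
	go func() { // task 1: sum the elements
		defer wg.Done()
		for _, v := range xs {
			sum += v
		}
	}()
	go func() { // task 2: find the maximum
		defer wg.Done()
		for _, v := range xs {
			if v > max {
				max = v
			}
		}
	}()
	wg.Wait()
	fmt.Println(sum, max) // 204 64
}
```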
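The shared memory and message passing programming models can likewise be sketched in a few lines. In the sketch below, a mutex-guarded counter stands in for the shared memory model, and channels stand in for message passing between workers that hold no shared state; this is an analogy to, not an implementation of, message passing libraries such as MPI.

```go
// A minimal sketch of two programming models: shared memory
// (a mutex-guarded counter) and message passing (channels).
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Shared memory model: goroutines update one counter behind a lock,
	// coordinating through a common address space.
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock()
				counter++ // synchronized access to shared state
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("shared-memory counter:", counter) // 4000

	// Message passing model: workers own no shared state; they receive
	// work and send results over channels, much as separate processes
	// exchange messages in a distributed-memory program.
	jobs := make(chan int)
	results := make(chan int)
	for w := 0; w < 4; w++ {
		go func() {
			for n := range jobs {
				results <- n * n
			}
		}()
	}
	go func() {
		for n := 1; n <= 8; n++ {
			jobs <- n
		}
		close(jobs)
	}()
	total := 0
	for i := 0; i < 8; i++ {
		total += <-results
	}
	fmt.Println("message-passing sum of squares:", total) // 204
}
```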
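Finally, a small worked example that walks through all three design considerations at once: the work is partitioned into contiguous chunks, each worker communicates a partial sum over a channel, and the main goroutine synchronizes by waiting for every partial result before combining them. The even splitting scheme is illustrative only; real programs must also balance load when work per element varies.

```go
// A worked example of the three design steps: partition the work,
// communicate partial results, synchronize on completion.
package main

import "fmt"

func main() {
	data := make([]int, 100)
	for i := range data {
		data[i] = i + 1 // 1..100
	}

	const workers = 4
	partials := make(chan int, workers)

	// Partitioning: divide the slice into near-equal contiguous chunks.
	chunk := (len(data) + workers - 1) / workers
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if hi > len(data) {
			hi = len(data)
		}
		// Communication: each worker sends its partial sum on a channel.
		go func(part []int) {
			sum := 0
			for _, v := range part {
				sum += v
			}
			partials <- sum
		}(data[lo:hi])
	}

	// Synchronization: the main goroutine blocks until every worker
	// has reported, then combines the partial results.
	total := 0
	for w := 0; w < workers; w++ {
		total += <-partials
	}
	fmt.Println(total) // 5050
}
```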